CN111507231A - Automatic detection method and system for correctness of process steps

Automatic detection method and system for correctness of process steps

Info

Publication number
CN111507231A
Authority
CN
China
Prior art keywords
human body
information
process step
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010283239.XA
Other languages
Chinese (zh)
Other versions
CN111507231B (en)
Inventor
曹恩华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengjing Intelligent Technology (Jiaxing) Co., Ltd.
Original Assignee
Sany Heavy Industry Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sany Heavy Industry Co., Ltd.
Priority to CN202010283239.XA
Publication of CN111507231A
Application granted
Publication of CN111507231B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0633 Workflow analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Game Theory and Decision Science (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Psychiatry (AREA)
  • Educational Administration (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for automatically detecting the correctness of process steps, applied to a server. The method comprises: performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target category of each pixel in the image to be detected, the image to be detected being an image frame in a video containing the process step execution process; determining a human body region and a working region in the image to be detected based on the target categories; identifying the human body region and the working region based on a deep learning method to obtain human body posture information and work task information, respectively; determining process step information based on the human body posture information and the work task information; and extracting a target feature vector of the process step information based on a deep learning method and judging, based on the target feature vector, whether the process step is executed correctly. The invention alleviates the technical problems in the prior art of high cost and the inability to detect process steps in real time.

Description

Automatic detection method and system for correctness of process steps
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a system for automatically detecting correctness of process steps.
Background
Whether process steps are executed correctly during actual production has a decisive influence on production quality and production safety. Incorrectly executed process steps not only affect product quality but can, in many cases, cause serious safety accidents, so checking whether workers execute the process steps in the correct order has long been a key concern and pain point of factory management. At present, the correctness of process steps is checked mainly through manual spot checks or experience-based inspection by workers, which leads to the following technical problems: on the one hand, the cost is relatively high; on the other hand, real-time detection of the whole process is difficult to achieve, so errors in the process steps are hard to discover in time.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for automatically detecting the correctness of process steps, so as to alleviate the technical problems in the prior art of high cost and the inability to detect process steps in real time.
In a first aspect, an embodiment of the present invention provides an automatic detection method for correctness of process steps, applied to a server, comprising: performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target category of each pixel in the image to be detected, the image to be detected being an image frame in a video containing the process step execution process; determining a human body region and a working region in the image to be detected based on the target categories; identifying the human body region and the working region based on a deep learning method to obtain human body posture information in the human body region and work task information in the working region, respectively; determining the process step information executed by a worker in the image to be detected based on the human body posture information and the work task information; and extracting a target feature vector of the process step information based on a deep learning method and judging whether the process step is executed correctly based on the target feature vector.
Further, recognizing the human body region and the working region based on a deep learning method to respectively obtain human body posture information in the human body region and working task information in the working region, including: extracting a first feature vector of the human body region; recognizing the first feature vector based on a deep learning method to obtain human body posture information in the human body region; extracting a second feature vector of the working area; and identifying the second characteristic vector based on a deep learning method to obtain the work task information in the work area.
Further, determining the process step information executed by a worker in the image to be detected based on the human body posture information and the work task information includes: determining the execution time of each process step based on the work task information; and determining the process step information executed by the worker in the image to be detected based on the execution time of each process step and the human body posture information.
Further, determining whether the process step is performed correctly based on the target feature vector includes: comparing the target characteristic vector with characteristic vectors in a preset characteristic vector library to obtain a comparison result; and judging whether the process steps are correctly executed or not based on the comparison result.
In a second aspect, an embodiment of the present invention further provides an automatic detection system for correctness of process steps, applied to a server, comprising: a semantic segmentation module, a first determination module, an identification module, a second determination module and a judgment module. The semantic segmentation module is used for performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target category of each pixel in the image to be detected, the image to be detected being an image frame in a video containing the process step execution process; the first determination module is used for determining a human body region and a working region in the image to be detected based on the target categories; the identification module is used for identifying the human body region and the working region based on a deep learning method to obtain human body posture information in the human body region and work task information in the working region, respectively; the second determination module is used for determining the process step information executed by a worker in the image to be detected based on the human body posture information and the work task information; and the judgment module is used for extracting a target feature vector of the process step information based on a deep learning method and judging whether the process step is executed correctly based on the target feature vector.
Further, the identification module includes: the human body recognition device comprises a first recognition unit and a second recognition unit, wherein the first recognition unit is used for extracting a first feature vector of the human body region; recognizing the first feature vector based on a deep learning method to obtain human body posture information in the human body region; the second identification unit is used for extracting a second feature vector of the working area; and identifying the second characteristic vector based on a deep learning method to obtain the work task information in the work area.
Further, the second determining module is further configured to: determining the execution time of each process step based on the work task information; and determining process step information executed by a worker in the image to be detected based on the execution time of each process step and the human body posture information.
Further, the determining module is further configured to: comparing the target characteristic vector with characteristic vectors in a preset characteristic vector library to obtain a comparison result; and judging whether the process steps are correctly executed or not based on the comparison result.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to execute the method according to the first aspect.
According to the automatic detection method and system for the correctness of process steps provided by the invention, the human body posture information and the work task information in the process steps are deeply fused through a deep learning method; the process step information is then determined from the human body posture information and the work task information, the feature vector of the process step information is extracted based on a deep learning method, and finally whether the process steps are executed correctly is judged based on the feature vector. Because the execution of the process steps can be detected automatically from video of the process step execution process acquired in real time, no manual inspection is needed, labor cost is reduced, and the technical problems in the prior art of high cost and the inability to detect process steps in real time are alleviated.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for automatically detecting correctness of process steps according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an automated inspection system for correctness of process steps according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another automatic detection system for correctness of process steps according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment 1:
Fig. 1 is a flowchart of a method for automatically detecting the correctness of process steps according to an embodiment of the present invention; the method is applied to a server. As shown in Fig. 1, the method specifically includes the following steps:
step S102, performing semantic segmentation operation on an image to be detected based on a deep learning method to obtain a target category of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the process step execution process. Optionally, the target class of pixels comprises any one of: human body, work area and background.
Optionally, the video of the process step execution process is acquired by a video recording device. For example, video of workers executing process steps in a factory can be captured by a monitoring camera.
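As a minimal illustration of this acquisition step (assuming an OpenCV-readable stream; the camera address below is a hypothetical placeholder), frames can be pulled from a monitoring camera and handed to the detection pipeline one by one:

```python
# Sketch of frame acquisition from a monitoring camera. The stream URL is a
# placeholder; the patent only requires that the video contain the process
# step execution process.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical address
while cap.isOpened():
    ok, frame = cap.read()  # one image frame to be detected
    if not ok:
        break
    # ... hand `frame` to the semantic segmentation step S102 ...
cap.release()
```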
Optionally, the deep learning method used in the embodiment of the present invention is a method for object detection and segmentation, for example a YOLO model, a Mask R-CNN model, a Faster R-CNN model, an SSD model, or a RetinaNet model.
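For illustration only, the following sketch shows what step S102 could look like with a torchvision DeepLabV3 network fine-tuned to the three pixel categories named above; the class indices and the choice of backbone are assumptions, not prescribed by the patent.

```python
# Minimal sketch of step S102: per-pixel target categories via semantic
# segmentation, assuming class indices 0 = background, 1 = human body,
# 2 = working region.
import torch
import torchvision

NUM_CLASSES = 3  # background, human body, working region (assumed)

model = torchvision.models.segmentation.deeplabv3_resnet50(num_classes=NUM_CLASSES)
model.eval()

def segment_frame(frame: torch.Tensor) -> torch.Tensor:
    """frame: (3, H, W) float tensor, normalized as during training.
    Returns an (H, W) tensor holding the target category of each pixel."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))["out"]  # (1, NUM_CLASSES, H, W)
    return logits.argmax(dim=1).squeeze(0)
```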
Step S104: determining a human body region and a working region in the image to be detected based on the target categories.
Here, the human body region and the working region are represented as circumscribed rectangular boxes, that is, the bounding rectangles of the corresponding pixel regions.
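Continuing the assumed class indices of the previous sketch, the circumscribed rectangle of each region can be read directly off the per-pixel category map (step S104):

```python
# Sketch of step S104: the bounding (circumscribed) rectangle of every pixel
# carrying a given target category.
import torch

HUMAN, WORK = 1, 2  # assumed class indices from the segmentation sketch above

def bounding_box(category_map: torch.Tensor, cls: int):
    """Return (x_min, y_min, x_max, y_max) for pixels labelled `cls`, or None."""
    ys, xs = torch.nonzero(category_map == cls, as_tuple=True)
    if ys.numel() == 0:
        return None  # the class does not appear in this frame
    return (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())

# human_box = bounding_box(categories, HUMAN)
# work_box  = bounding_box(categories, WORK)
```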
Step S106: identifying the human body region and the working region based on a deep learning method to obtain human body posture information in the human body region and work task information in the working region, respectively.
Step S108: determining the process step information executed by the worker in the image to be detected based on the human body posture information and the work task information.
Step S110: extracting a target feature vector of the process step information based on a deep learning method, and judging whether the process step is executed correctly based on the target feature vector.
The invention thus provides an automatic detection method for correctness of process steps that deeply fuses the human body posture information and the work task information in the process steps through a deep learning method, determines the process step information from the human body posture information and the work task information, extracts the feature vector of the process step information based on a deep learning method, and finally judges whether the process steps are executed correctly based on the feature vector. Because the execution of the process steps can be detected automatically from video of the process step execution process acquired in real time, no manual inspection is needed, labor cost is reduced, and the technical problems in the prior art of high cost and the inability to detect process steps in real time are alleviated.
Optionally, step S106 includes the following steps:
step S1061, extracting a first feature vector of the human body area;
step S1062, identifying the first feature vector based on a deep learning method to obtain human body posture information in a human body region;
step S1063, extracting a second feature vector of the working area;
and step S1064, identifying the second feature vector based on a deep learning method to obtain work task information in the work area.
Optionally, step S108 includes the steps of:
step S1081, determining the execution time of each process step based on the work task information;
step S1082 determines process step information performed by the worker in the image to be detected based on the execution time and the human body posture information of each process step.
Specifically, the time information of the process step is first determined based on the work task information, where the time information includes the execution time and the duration of the process step. Then the video of the process step within its execution time is obtained, a plurality of image frames are extracted from the video, the human body posture information in each image frame is obtained by a human body tracking method, the posture information is combined into time-sequenced video information of the process step according to the time order of the image frames, and this time-sequenced video information is taken as the process step information.
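A sketch of this time-sequence construction is given below; `track_pose` is a hypothetical helper standing in for the human tracking method, which the patent names but does not specify:

```python
# Sketch: combine per-frame posture info, ordered by frame time, into the
# time-sequenced process step information.
from typing import Callable, List, Tuple
import torch

def build_step_sequence(
    frames: List[torch.Tensor],                 # frames inside the execution time
    timestamps: List[float],                    # capture time of each frame
    track_pose: Callable[[torch.Tensor], int],  # hypothetical tracking helper
) -> List[Tuple[float, int]]:
    sequence = [(t, track_pose(f)) for t, f in zip(timestamps, frames)]
    sequence.sort(key=lambda pair: pair[0])     # time order of the image frames
    return sequence
```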
Optionally, step S110 includes the steps of:
Step S1101: comparing the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result;
Step S1102: judging whether the process step is executed correctly based on the comparison result.
In the embodiment of the present invention, the preset feature vector library is a set containing a plurality of feature vectors; for example, it may contain feature vectors of correctly executed process steps and feature vectors of incorrectly executed process steps under different conditions. The target feature vector is compared with each feature vector in the preset library to obtain the comparison results, and the feature vector closest to the target feature vector is found in the library according to these results, so that whether the process step corresponding to the target feature vector is executed correctly can be determined from the matched vector. Specifically, if the closest feature vector corresponds to a correctly executed process step, the process step is judged to have been executed correctly; if the closest feature vector corresponds to an incorrectly executed process step, the process step is judged not to have been executed correctly, and the specific erroneous step can be identified from that matched error vector.
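A minimal sketch of this comparison, using cosine similarity as one plausible metric (the patent does not name a specific distance):

```python
# Sketch of steps S1101-S1102: nearest-neighbour lookup of the target feature
# vector in the preset feature vector library.
import torch
import torch.nn.functional as F

def judge_step(target: torch.Tensor, library: torch.Tensor, labels: list):
    """target: (D,) feature vector; library: (N, D) stored vectors.
    labels[i] describes vector i: correct execution, or a specific error step."""
    sims = F.cosine_similarity(target.unsqueeze(0), library)  # (N,) comparison
    nearest = int(sims.argmax())
    return labels[nearest]  # verdict carried by the closest feature vector
```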
The embodiment of the invention can produce the following technical effects:
the invention carries out deep fusion on the characteristic vectors such as human body posture information, working areas, working task information and the like in the process steps, enhances the distinguishability and robustness of the characteristic vectors, and ensures that the identification and detection of the correctness of the final process steps are more accurate.
According to the invention, the time sequence video information of the process steps in the complete work task is obtained by combining the human body tracking method with the time information of the work task, and the introduction of the complete time sequence information can not only judge whether the process steps are executed correctly, but also identify specific wrong steps.
Embodiment 2:
Fig. 2 is a schematic diagram of an automatic detection system for correctness of process steps according to an embodiment of the present invention; the system is applied to a server. As shown in Fig. 2, the system includes: a semantic segmentation module 10, a first determination module 20, a recognition module 30, a second determination module 40 and a judgment module 50.
Specifically, the semantic segmentation module 10 is configured to perform semantic segmentation operation on an image to be detected based on a deep learning method to obtain a target category of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the process step execution process.
A first determining module 20, configured to determine a human body region and a working region in an image to be detected based on a target class.
The recognition module 30 is configured to recognize the human body region and the working region based on a deep learning method, and obtain human body posture information in the human body region and work task information in the working region, respectively.
And the second determining module 40 is used for determining the process step information executed by the workers in the image to be detected based on the human body posture information and the work task information.
And the judging module 50 is used for extracting a target feature vector of the process step information based on a deep learning method and judging whether the process step is executed correctly based on the target feature vector.
The automatic detection system for the correctness of process steps provided by the invention deeply fuses the human body posture information and the work task information in the process steps through a deep learning method, determines the process step information from the human body posture information and the work task information, extracts the feature vector of the process step information based on a deep learning method, and finally judges whether the process steps are executed correctly based on the feature vector. Because the execution of the process steps can be detected automatically from video of the process step execution process acquired in real time, no manual inspection is needed, labor cost is reduced, and the technical problems in the prior art of high cost and the inability to detect process steps in real time are alleviated.
Optionally, Fig. 3 is a schematic diagram of another automatic detection system for correctness of process steps according to an embodiment of the present invention. As shown in Fig. 3, the identification module 30 includes: a first recognition unit 31 and a second recognition unit 32.
Specifically, the first recognition unit 31 is configured to extract a first feature vector of the human body region; and identifying the first feature vector based on a deep learning method to obtain human body posture information in the human body region.
A second recognition unit 32, configured to extract a second feature vector of the working area; and identifying the second characteristic vector based on a deep learning method to obtain the work task information in the work area.
Optionally, the second determining module 40 is further configured to:
determining the execution time of each process step based on the work task information; and determining the process step information executed by the worker in the image to be detected based on the execution time of each process step and the human body posture information.
Optionally, the determining module 50 is further configured to:
comparing the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result; and judging whether the process step is executed correctly based on the comparison result.
The embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps of the method in the first embodiment are implemented.
The embodiment of the invention also provides a computer readable medium with a non-volatile program code executable by a processor, wherein the program code causes the processor to execute the method in the first embodiment.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An automatic detection method for correctness of process steps, applied to a server, comprising the following steps:
performing semantic segmentation operation on an image to be detected based on a deep learning method to obtain a target category of each pixel in the image to be detected; the image to be detected is an image frame in a video comprising a process step execution process;
determining a human body region and a working region in the image to be detected based on the target category;
identifying the human body region and the working region based on a deep learning method to respectively obtain human body posture information in the human body region and working task information in the working region;
determining process step information executed by workers in the image to be detected based on the human body posture information and the work task information;
the deep learning-based method extracts a target feature vector of the process step information and judges whether the process step is executed correctly based on the target feature vector.
2. The method according to claim 1, wherein the identifying the human body region and the working region based on a deep learning method to obtain human body posture information in the human body region and work task information in the working region respectively comprises:
extracting a first feature vector of the human body region;
recognizing the first feature vector based on a deep learning method to obtain human body posture information in the human body region;
extracting a second feature vector of the working area;
and identifying the second characteristic vector based on a deep learning method to obtain the work task information in the work area.
3. The method of claim 1, wherein determining process step information performed by a worker in the image to be detected based on the body pose information and the work task information comprises:
determining the execution time of each process step based on the work task information;
and determining process step information executed by a worker in the image to be detected based on the execution time of each process step and the human body posture information.
4. The method of claim 1, wherein determining whether the process step is executed correctly based on the target feature vector comprises:
comparing the target characteristic vector with characteristic vectors in a preset characteristic vector library to obtain a comparison result;
and judging whether the process steps are correctly executed or not based on the comparison result.
5. An automatic detection system for correctness of process steps, applied to a server, comprising: a semantic segmentation module, a first determination module, an identification module, a second determination module and a judgment module, wherein,
the semantic segmentation module is used for performing semantic segmentation operation on an image to be detected based on a deep learning method to obtain a target category of each pixel in the image to be detected; the image to be detected is an image frame in a video comprising a process step execution process;
the first determination module is used for determining a human body area and a working area in the image to be detected based on the target category;
the recognition module is used for recognizing the human body region and the working region based on a deep learning method to respectively obtain human body posture information in the human body region and working task information in the working region;
the second determining module is used for determining process step information executed by workers in the image to be detected based on the human body posture information and the work task information;
the judging module is used for extracting a target characteristic vector of the process step information based on a deep learning method and judging whether the process step is executed correctly or not based on the target characteristic vector.
6. The system of claim 5, wherein the identification module comprises: a first identification unit and a second identification unit, wherein,
the first identification unit is used for extracting a first feature vector of the human body region; recognizing the first feature vector based on a deep learning method to obtain human body posture information in the human body region;
the second identification unit is used for extracting a second feature vector of the working area; and identifying the second characteristic vector based on a deep learning method to obtain the work task information in the work area.
7. The system of claim 5, wherein the second determining module is further configured to:
determining the execution time of each process step based on the work task information;
and determining process step information executed by a worker in the image to be detected based on the execution time of each process step and the human body posture information.
8. The system of claim 5, wherein the determining module is further configured to:
comparing the target characteristic vector with characteristic vectors in a preset characteristic vector library to obtain a comparison result;
and judging whether the process steps are correctly executed or not based on the comparison result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 4 are implemented when the computer program is executed by the processor.
10. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1-4.
CN202010283239.XA 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps Active CN111507231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010283239.XA CN111507231B (en) 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps


Publications (2)

Publication Number  Publication Date
CN111507231A  2020-08-07
CN111507231B  2023-06-23

Family

ID=71876007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010283239.XA Active CN111507231B (en) 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps

Country Status (1)

Country Link
CN (1) CN111507231B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100246894A1 (en) * 2009-03-25 2010-09-30 Naoki Koike Component assembly inspection method and component assembly inspection apparatus
CN103197640A (en) * 2013-03-26 2013-07-10 江苏润达光伏科技有限公司 Intelligent management and control system and method of manufacturing technique
CN104935879A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Multi-View Human Detection Using Semi-Exhaustive Search
CN106257510A (en) * 2016-03-25 2016-12-28 深圳增强现实技术有限公司 Operational data processing method based on Intelligent worn device and system
CN109477951A (en) * 2016-08-02 2019-03-15 阿特拉斯5D公司 People and/or identification and the system and method for quantifying pain, fatigue, mood and intention are identified while protecting privacy
CN108198221A (en) * 2018-01-23 2018-06-22 平顶山学院 A kind of automatic stage light tracking system and method based on limb action
CN108764333A (en) * 2018-05-28 2018-11-06 北京纵目安驰智能科技有限公司 One kind being based on the cascade semantic segmentation method of time series, system, terminal and storage medium
CN109166360A (en) * 2018-10-09 2019-01-08 丰羽教育科技(上海)有限公司 A kind of tutoring system and its method of safety operation equipment
CN109492602A (en) * 2018-11-21 2019-03-19 华侨大学 A kind of process clocking method and system based on human body limb language
CN110232379A (en) * 2019-06-03 2019-09-13 上海眼控科技股份有限公司 A kind of vehicle attitude detection method and system
CN110674712A (en) * 2019-09-11 2020-01-10 苏宁云计算有限公司 Interactive behavior recognition method and device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HU S. JACK et al.: "Effect of cycle to cycle task variations in mixed-model assembly lines on workers' upper body and lower back exertions and recovery time: A simulated assembly study" *
WANG Jiajie et al.: "Vision-based intelligent recognition and assembly guidance of aerospace electrical connectors" (基于视觉的航天电连接器的智能识别与装配引导) *
XIAO Hong: "Research on key technologies of mobile three-dimensional models for complex product assembly sites" (面向复杂产品装配现场的移动三维模型关键技术研究) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861823A (en) * 2021-04-06 2021-05-28 南京工业大学 Method and device for visual detection and positioning of workpiece installation key process
CN114821478A (en) * 2022-05-05 2022-07-29 北京容联易通信息技术有限公司 Process flow detection method and system based on video intelligent analysis
CN114821478B (en) * 2022-05-05 2023-01-13 北京容联易通信息技术有限公司 Process flow detection method and system based on video intelligent analysis

Also Published As

Publication number Publication date
CN111507231B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109325538B (en) Object detection method, device and computer-readable storage medium
CN110674712A (en) Interactive behavior recognition method and device, computer equipment and storage medium
CN111310645A (en) Overflow bin early warning method, device, equipment and storage medium for cargo accumulation amount
CN111507231B (en) Automatic detection method and system for correctness of process steps
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN111339901B (en) Image-based intrusion detection method and device, electronic equipment and storage medium
CN111368682A (en) Method and system for detecting and identifying station caption based on faster RCNN
CN111462188A (en) Camera movement detection method and system
CN113095292A (en) Gesture recognition method and device, electronic equipment and readable storage medium
CN110929555B (en) Face recognition method and electronic device using same
KR101774815B1 (en) Cursor location recognition test automation system and test automation method using the same
CN112104838B (en) Image distinguishing method, monitoring camera and monitoring camera system thereof
CN116630809A (en) Geological radar data automatic identification method and system based on intelligent image analysis
CN113536842A (en) Electric power operator safety dressing identification method and device
CN116229502A (en) Image-based tumbling behavior identification method and equipment
CN113869364A (en) Image processing method, image processing apparatus, electronic device, and medium
CN112927269A (en) Map construction method and device based on environment semantics and computer equipment
CN111881733A (en) Worker operation step specification visual identification judgment and guidance method and system
CN114092542B (en) Bolt measurement method and system based on two-dimensional vision
CN110795705A (en) Track data processing method, device, equipment and storage medium
CN116884034B (en) Object identification method and device
CN113298811A (en) Automatic counting method, device and equipment for number of people in intelligent classroom and storage medium
CN116188814A (en) Stroke area-oriented identification method and system
CN117456212A (en) Image matching method, matching module and chip
CN114092542A (en) Bolt measuring method and system based on two-dimensional vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 2023-01-05
Address after: Room 116, Building 4, No. 288 Development Avenue, Tongxiang Economic Development Zone, Tongxiang City, Jiaxing City, Zhejiang Province, 314506
Applicant after: Shengjing Intelligent Technology (Jiaxing) Co., Ltd.
Address before: 5th Floor, Building 6, No. 8 Beiqing Road, Changping District, Beijing, 102200
Applicant before: Sany Heavy Industry Co., Ltd.
GR01 Patent grant