CN111507231B - Automatic detection method and system for correctness of process steps

Automatic detection method and system for correctness of process steps

Info

Publication number
CN111507231B
CN111507231B (application CN202010283239.XA)
Authority
CN
China
Prior art keywords
human body
information
feature vector
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010283239.XA
Other languages
Chinese (zh)
Other versions
CN111507231A (en)
Inventor
Cao Enhua (曹恩华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengjing Intelligent Technology Jiaxing Co ltd
Original Assignee
Shengjing Intelligent Technology Jiaxing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shengjing Intelligent Technology Jiaxing Co ltd
Priority to CN202010283239.XA
Publication of CN111507231A
Application granted
Publication of CN111507231B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0633 Workflow analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention provides a method and system for automatically detecting the correctness of process steps, applied to a server. The method comprises: performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target class of each pixel in the image to be detected, wherein the image to be detected is an image frame in a video containing the execution of the process steps; determining a human body area and a working area in the image to be detected based on the target classes; recognizing the human body area and the working area based on a deep learning method to obtain human body posture information and work task information, respectively; determining the process step information based on the human body posture information and the work task information; and extracting a target feature vector of the process step information based on a deep learning method and judging, based on the target feature vector, whether the process step was performed correctly. The invention alleviates the prior-art problems of high cost and the inability to detect process steps in real time.

Description

Automatic detection method and system for correctness of process steps
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and system for automatically detecting the correctness of process steps.
Background
Whether process steps are performed correctly during actual production has a decisive influence on production quality and safety: incorrectly executed process steps not only degrade product quality but in many cases also lead to serious safety accidents. How to check whether workers execute process steps in the correct order has therefore long been a key concern and pain point of factory management. At present, the correctness of process steps is checked mainly by manual spot checks or by workers relying on their own experience, which raises the following technical problems: on the one hand, the cost is relatively high; on the other hand, it is difficult to monitor the whole process in real time, so errors in the process steps are hard to discover in time.
Disclosure of Invention
In view of the above, the present invention aims to provide a method and system for automatically detecting the correctness of process steps, so as to alleviate the prior-art problems of high cost and the inability to detect process steps in real time.
In a first aspect, an embodiment of the present invention provides a method for automatically detecting the correctness of process steps, applied to a server, comprising: performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target class of each pixel in the image to be detected, wherein the image to be detected is an image frame in a video containing the execution of the process steps; determining a human body area and a working area in the image to be detected based on the target classes; recognizing the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively; determining, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected; and extracting a target feature vector of the process step information based on a deep learning method, and judging whether the process step was performed correctly based on the target feature vector.
Further, recognizing the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively, includes: extracting a first feature vector of the human body area; recognizing the first feature vector based on a deep learning method to obtain the human body posture information in the human body area; extracting a second feature vector of the working area; and recognizing the second feature vector based on a deep learning method to obtain the work task information in the working area.
Further, determining, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected includes: determining the execution time of each process step based on the work task information; and determining the process step information executed by the worker in the image to be detected based on the execution time of each process step and the human body posture information.
Further, judging whether the process step was performed correctly based on the target feature vector includes: comparing the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result; and judging whether the process step was performed correctly based on the comparison result.
In a second aspect, an embodiment of the present invention further provides a system for automatically detecting the correctness of process steps, applied to a server, comprising: a semantic segmentation module, a first determination module, a recognition module, a second determination module and a judgment module, wherein the semantic segmentation module is configured to perform a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target class of each pixel in the image to be detected, the image to be detected being an image frame in a video containing the execution of the process steps; the first determination module is configured to determine a human body area and a working area in the image to be detected based on the target classes; the recognition module is configured to recognize the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively; the second determination module is configured to determine, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected; and the judgment module is configured to extract a target feature vector of the process step information based on a deep learning method and judge whether the process step was performed correctly based on the target feature vector.
Further, the recognition module includes a first recognition unit and a second recognition unit, wherein the first recognition unit is configured to extract a first feature vector of the human body area and to recognize the first feature vector based on a deep learning method to obtain the human body posture information in the human body area; and the second recognition unit is configured to extract a second feature vector of the working area and to recognize the second feature vector based on a deep learning method to obtain the work task information in the working area.
Further, the second determination module is further configured to: determine the execution time of each process step based on the work task information; and determine the process step information executed by the worker in the image to be detected based on the execution time of each process step and the human body posture information.
Further, the judgment module is further configured to: compare the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result; and judge whether the process step was performed correctly based on the comparison result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the computer program to implement the steps of the method described in the first aspect.
In a fourth aspect, embodiments of the present invention also provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of the first aspect.
According to the method and system for automatically detecting the correctness of process steps provided by the invention, human body posture information and work task information in the process steps are deeply fused by a deep learning method; the process step information is then determined from the human body posture information and the work task information; a feature vector of the process step information is extracted by a deep learning method; and finally whether the process step was performed correctly is judged from the feature vector. The invention can automatically detect the execution of process steps from video of the process step execution obtained in real time, requires no manual inspection, reduces labor cost, and thereby alleviates the prior-art problems of high cost and the inability to detect process steps in real time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of an automated detection method for correctness of process steps according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an automated detection system for correctness of process steps according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another automated detection system for correctness of process steps according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Embodiment one:
Fig. 1 is a flowchart of a method for automatically detecting the correctness of process steps provided according to an embodiment of the present invention; the method is applied to a server. As shown in fig. 1, the method specifically includes the following steps:
step S102, carrying out semantic segmentation operation on an image to be detected based on a deep learning method to obtain a target class of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the process step execution procedure. Optionally, the target class of pixels includes any one of: human body, working area and background.
Optionally, the video of the process step execution is acquired by a moving-image recording device. For example, video of a worker carrying out a process step in a factory may be acquired by a surveillance camera.
Optionally, the deep learning method used in the embodiments of the present invention is an object detection and segmentation method, and may be, for example, a YOLO model, a Mask R-CNN model, a Faster R-CNN model, an SSD model or a RetinaNet model.
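As an illustration of this segmentation step, a minimal sketch in Python/PyTorch is given below. The DeepLabV3 backbone and the three-class output head (0 = background, 1 = human body, 2 = working area) are assumptions made for the sketch rather than the patent's specification, and the model would have to be trained on annotated production-floor images before use:

    # Minimal sketch: per-pixel target classes via semantic segmentation.
    # Model choice and class indices are assumptions, not the patent's spec.
    import torch
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_CLASSES = 3  # assumed: 0 = background, 1 = human body, 2 = working area

    model = deeplabv3_resnet50(num_classes=NUM_CLASSES)  # untrained here; fine-tune first
    model.eval()

    def segment_frame(frame: torch.Tensor) -> torch.Tensor:
        """frame: (3, H, W) float tensor in [0, 1]; returns an (H, W) class map."""
        with torch.no_grad():
            logits = model(frame.unsqueeze(0))["out"]  # (1, NUM_CLASSES, H, W)
        return logits.argmax(dim=1).squeeze(0)         # target class of each pixel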
Step S104, determining the human body area and the working area in the image to be detected based on the target category.
Here, the human body area and the working area are each represented by a circumscribed rectangular frame (bounding box).
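For illustration, the circumscribed rectangles can be derived directly from the per-pixel class map produced in step S102; the class indices below are the same assumptions as in the segmentation sketch:

    # Minimal sketch: circumscribed rectangles (bounding boxes) per target class.
    import numpy as np

    HUMAN, WORK_AREA = 1, 2  # assumed class indices from the segmentation step

    def bounding_rect(class_map: np.ndarray, target: int):
        """Return (x_min, y_min, x_max, y_max) enclosing all pixels of one class."""
        ys, xs = np.nonzero(class_map == target)
        if ys.size == 0:
            return None  # the class does not appear in this frame
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

    # class_map: (H, W) integer array, e.g. segment_frame(frame).numpy()
    # human_box = bounding_rect(class_map, HUMAN)      # human body area
    # work_box  = bounding_rect(class_map, WORK_AREA)  # working area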
And step S106, recognizing the human body area and the working area based on the deep learning method to respectively obtain the human body posture information in the human body area and the working task information in the working area.
Step S108, determining process step information executed by workers in the image to be detected based on the human body posture information and the work task information.
Step S110, extracting a target feature vector of the process step information based on the deep learning method, and judging whether the process step is correctly executed based on the target feature vector.
According to the method for automatically detecting the correctness of process steps described above, human body posture information and work task information in the process steps are deeply fused by a deep learning method; the process step information is then determined from the human body posture information and the work task information; a feature vector of the process step information is extracted by a deep learning method; and finally whether the process step was performed correctly is judged from the feature vector. The invention can automatically detect the execution of process steps from video of the process step execution obtained in real time, requires no manual inspection, reduces labor cost, and thereby alleviates the prior-art problems of high cost and the inability to detect process steps in real time.
Optionally, step S106 includes the steps of:
step S1061, extracting a first feature vector of a human body region;
step S1062, identifying the first feature vector based on a deep learning method to obtain human body posture information in a human body area;
step S1063, extracting a second feature vector of the working area;
step S1064, the second feature vector is identified based on the deep learning method, so as to obtain the information of the work task in the work area.
Optionally, step S108 includes the steps of:
step S1081, determining execution time of each process step based on the work task information;
step S1082 determines process step information performed by a worker in the image to be detected based on the execution time of each process step and the human body posture information.
Specifically, the time information of each process step is first determined based on the work task information, where the time information includes the execution time and the duration of the process step. The video of the process step within its execution time is then acquired, a plurality of image frames are extracted from the video, the human body posture information in these image frames is obtained by a human body tracking method, the human body posture information is combined into time-sequence video information of the process step according to the temporal order of the image frames, and this time-sequence video information is determined as the process step information.
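The assembly of the time-sequence information described above could be sketched as follows; the StepRecord container and the track_pose callback are hypothetical names introduced only for illustration:

    # Minimal sketch: collect pose information inside a step's execution window
    # and order it chronologically into time-sequence step information.
    from dataclasses import dataclass, field

    @dataclass
    class StepRecord:          # hypothetical container, not from the patent
        step_id: str
        start: float           # execution start time in seconds, from task info
        duration: float        # expected duration in seconds
        pose_sequence: list = field(default_factory=list)

    def collect_step_info(step: StepRecord, frames, track_pose):
        """frames: iterable of (timestamp, image); track_pose: tracking callback."""
        for ts, image in frames:
            if step.start <= ts <= step.start + step.duration:
                step.pose_sequence.append((ts, track_pose(image)))
        step.pose_sequence.sort(key=lambda item: item[0])  # temporal order
        return step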
Optionally, step S110 includes the steps of:
step S1101, comparing the target feature vector with feature vectors in a preset feature vector library to obtain a comparison result;
step S1102, based on the comparison result, determines whether the process step is performed correctly.
In the embodiment of the present invention, the preset feature vector library is a set of feature vectors; for example, it may include the feature vectors of correctly performed process steps as well as the feature vectors of process steps performed incorrectly in different ways. The target feature vector is compared with the feature vectors in the preset feature vector library to obtain a comparison result, and the feature vector closest to the target feature vector is found in the library according to that result, so that whether the process step corresponding to the target feature vector was performed correctly can be determined from the found feature vector. Specifically, if the feature vector closest to the target feature vector is one recorded for a correctly performed process step, the process step is judged to have been performed correctly; if the closest feature vector is one recorded for an incorrectly performed process step, the process step is judged not to have been performed correctly, and the specific erroneous step can also be identified from that incorrect-execution feature vector.
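A nearest-neighbour comparison of this kind might be sketched as below; the cosine distance and the label scheme are assumptions, since the embodiment only requires finding the closest vector in the library:

    # Minimal sketch: judge a step by its nearest vector in the preset library.
    import numpy as np

    def judge_step(target: np.ndarray, library: dict) -> str:
        """library maps a label ('correct', 'error: wrong order', ...) to a
        reference feature vector; returns the label of the closest vector."""
        def cosine_dist(a, b):
            return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return min(library, key=lambda label: cosine_dist(target, library[label]))

    # The step counts as correctly performed iff its nearest reference vector
    # is a correct-execution vector; an error label also names the faulty step.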
The embodiment of the invention can produce the following technical effects:
according to the invention, the feature vectors such as the human body posture information, the working area and the working task information in the working procedure steps are subjected to depth fusion, so that the distinguishing property and the robustness of the feature vectors are enhanced, and the final identification and detection of the correctness of the working procedure steps are more accurate.
According to the invention, the time-sequence video information of the process steps within a complete work task is obtained by a human body tracking method combined with the time information of the work task; by introducing this complete time-sequence information, the invention can not only judge whether the process steps were performed correctly but also identify the specific erroneous step.
Embodiment two:
Fig. 2 is a schematic diagram of a system for automatically detecting the correctness of process steps provided according to an embodiment of the present invention; the system is applied to a server. As shown in fig. 2, the system includes: a semantic segmentation module 10, a first determining module 20, a recognition module 30, a second determining module 40 and a judging module 50.
Specifically, the semantic segmentation module 10 is configured to perform a semantic segmentation operation on an image to be detected based on a deep learning method, so as to obtain the target class of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the execution of the process steps.
The first determining module 20 is configured to determine a human body region and a working region in the image to be detected based on the target class.
The recognition module 30 is configured to recognize a human body region and a working region based on a deep learning method, and obtain human body posture information in the human body region and work task information in the working region, respectively.
The second determining module 40 is configured to determine, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected.
The judging module 50 is configured to extract the target feature vector of the process step information based on a deep learning method and to judge whether the process step was performed correctly based on the target feature vector.
According to the system for automatically detecting the correctness of process steps described above, human body posture information and work task information in the process steps are deeply fused by a deep learning method; the process step information is then determined from the human body posture information and the work task information; a feature vector of the process step information is extracted by a deep learning method; and finally whether the process step was performed correctly is judged from the feature vector. The invention can automatically detect the execution of process steps from video of the process step execution obtained in real time, requires no manual inspection, reduces labor cost, and thereby alleviates the prior-art problems of high cost and the inability to detect process steps in real time.
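For orientation, the five modules of fig. 2 could be wired together as in the sketch below; the callables correspond to the illustrative sketches in Embodiment one, and all names are assumptions rather than the patent's implementation:

    # Minimal sketch: composition of the five modules of the detection system.
    class ProcessStepDetector:
        def __init__(self, segmenter, region_finder, recognizer, step_builder, judge):
            self.segment = segmenter            # semantic segmentation module 10
            self.find_regions = region_finder   # first determining module 20
            self.recognize = recognizer         # recognition module 30
            self.build_step = step_builder      # second determining module 40
            self.judge = judge                  # judging module 50

        def check_frame(self, frame, task_info, library):
            class_map = self.segment(frame)
            human_box, work_box = self.find_regions(class_map)
            pose, task = self.recognize(frame, human_box, work_box)
            step_vec = self.build_step(pose, task, task_info)
            return self.judge(step_vec, library)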
Optionally, fig. 3 is a schematic diagram of another system for automatically detecting the correctness of process steps provided according to an embodiment of the present invention. As shown in fig. 3, the recognition module 30 includes: a first recognition unit 31 and a second recognition unit 32.
Specifically, the first recognition unit 31 is configured to extract a first feature vector of the human body region; and identifying the first feature vector based on a deep learning method to obtain the human body posture information in the human body area.
A second recognition unit 32 for extracting a second feature vector of the working area; and identifying the second feature vector based on a deep learning method to obtain the work task information in the work area.
Optionally, the second determining module 40 is further configured to:
determining execution time of each process step based on the work task information; and determining the process step information executed by the worker in the image to be detected based on the execution time of each process step and the human body posture information.
Optionally, the judging module 50 is further configured to:
comparing the target feature vector with the feature vectors in the preset feature vector library to obtain a comparison result; and judging whether the process step was performed correctly based on the comparison result.
The embodiment of the invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the method in the first embodiment.
The present invention also provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of the first embodiment.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A method for automatically detecting the correctness of process steps, characterized by being applied to a server and comprising the following steps:
performing a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target class of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the execution of the process steps;
determining a human body area and a working area in the image to be detected based on the target classes;
recognizing the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively;
determining, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected, including: determining the time information of the process steps based on the work task information, wherein the time information comprises the execution time and the duration of the process steps; then acquiring the video of the process steps within the execution time, extracting a plurality of image frames from the video, obtaining the human body posture information in the plurality of image frames by a human body tracking method, combining the human body posture information into time-sequence video information of the process steps according to the temporal order of the plurality of image frames, and determining the time-sequence video information of the process steps as the process step information;
and extracting a target feature vector of the process step information based on a deep learning method, and judging whether the process step was performed correctly based on the target feature vector.
2. The method of claim 1, wherein recognizing the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively, comprises:
extracting a first feature vector of the human body area;
recognizing the first feature vector based on a deep learning method to obtain the human body posture information in the human body area;
extracting a second feature vector of the working area;
and recognizing the second feature vector based on a deep learning method to obtain the work task information in the working area.
3. The method of claim 1, wherein judging whether the process step was performed correctly based on the target feature vector comprises:
comparing the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result;
and judging whether the process step was performed correctly based on the comparison result.
4. A system for automatically detecting the correctness of process steps, applied to a server, characterized by comprising: a semantic segmentation module, a first determination module, a recognition module, a second determination module and a judgment module, wherein,
the semantic segmentation module is configured to perform a semantic segmentation operation on an image to be detected based on a deep learning method to obtain the target class of each pixel in the image to be detected; the image to be detected is an image frame in a video containing the execution of the process steps;
the first determination module is configured to determine a human body area and a working area in the image to be detected based on the target classes;
the recognition module is configured to recognize the human body area and the working area based on a deep learning method to obtain human body posture information in the human body area and work task information in the working area, respectively;
the second determination module is configured to determine, based on the human body posture information and the work task information, the process step information executed by a worker in the image to be detected, including: determining the time information of the process steps based on the work task information, wherein the time information comprises the execution time and the duration of the process steps; then acquiring the video of the process steps within the execution time, extracting a plurality of image frames from the video, obtaining the human body posture information in the plurality of image frames by a human body tracking method, combining the human body posture information into time-sequence video information of the process steps according to the temporal order of the plurality of image frames, and determining the time-sequence video information of the process steps as the process step information;
and the judgment module is configured to extract a target feature vector of the process step information based on a deep learning method and judge whether the process step was performed correctly based on the target feature vector.
5. The system of claim 4, wherein the recognition module comprises: a first recognition unit and a second recognition unit, wherein,
the first recognition unit is configured to extract a first feature vector of the human body area and to recognize the first feature vector based on a deep learning method to obtain the human body posture information in the human body area;
and the second recognition unit is configured to extract a second feature vector of the working area and to recognize the second feature vector based on a deep learning method to obtain the work task information in the working area.
6. The system of claim 4, wherein the judgment module is further configured to:
compare the target feature vector with the feature vectors in a preset feature vector library to obtain a comparison result;
and judge whether the process step was performed correctly based on the comparison result.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 3 when executing the computer program.
8. A computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of claims 1-3.
CN202010283239.XA 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps Active CN111507231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010283239.XA CN111507231B (en) 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps

Publications (2)

Publication Number Publication Date
CN111507231A CN111507231A (en) 2020-08-07
CN111507231B true CN111507231B (en) 2023-06-23

Family

ID=71876007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010283239.XA Active CN111507231B (en) 2020-04-10 2020-04-10 Automatic detection method and system for correctness of process steps

Country Status (1)

Country Link
CN (1) CN111507231B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861823A (en) * 2021-04-06 2021-05-28 南京工业大学 Method and device for visual detection and positioning of workpiece installation key process
CN114821478B (en) * 2022-05-05 2023-01-13 北京容联易通信息技术有限公司 Process flow detection method and system based on video intelligent analysis

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197640A (en) * 2013-03-26 2013-07-10 江苏润达光伏科技有限公司 Intelligent management and control system and method of manufacturing technique
CN104935879A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Multi-View Human Detection Using Semi-Exhaustive Search
CN106257510A (en) * 2016-03-25 2016-12-28 深圳增强现实技术有限公司 Operational data processing method based on Intelligent worn device and system
CN108198221A (en) * 2018-01-23 2018-06-22 平顶山学院 A kind of automatic stage light tracking system and method based on limb action
CN108764333A (en) * 2018-05-28 2018-11-06 北京纵目安驰智能科技有限公司 One kind being based on the cascade semantic segmentation method of time series, system, terminal and storage medium
CN109166360A (en) * 2018-10-09 2019-01-08 丰羽教育科技(上海)有限公司 A kind of tutoring system and its method of safety operation equipment
CN109477951A (en) * 2016-08-02 2019-03-15 阿特拉斯5D公司 People and/or identification and the system and method for quantifying pain, fatigue, mood and intention are identified while protecting privacy
CN109492602A (en) * 2018-11-21 2019-03-19 华侨大学 A kind of process clocking method and system based on human body limb language
CN110232379A (en) * 2019-06-03 2019-09-13 上海眼控科技股份有限公司 A kind of vehicle attitude detection method and system
CN110674712A (en) * 2019-09-11 2020-01-10 苏宁云计算有限公司 Interactive behavior recognition method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5376220B2 (en) * 2009-03-25 2013-12-25 富士ゼロックス株式会社 Component assembly inspection method and component assembly inspection device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hu, S. Jack, et al. Effect of cycle to cycle task variations in mixed-model assembly lines on workers' upper body and lower back exertions and recovery time: A simulated assembly study. Industrial Ergonomics. 2017, Vol. 61, 88-100. *
Wang Jiajie et al. Vision-based intelligent recognition and assembly guidance for aerospace electrical connectors. Computer Integrated Manufacturing Systems. 2017, Vol. 23 (No. 23), 2423-2430. *
Xiao Hong. Research on key technologies of mobile 3D models for complex-product assembly sites. China Doctoral Dissertations Full-text Database (Engineering Science and Technology I). 2015, B022-212. *

Also Published As

Publication number Publication date
CN111507231A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN109325538B (en) Object detection method, device and computer-readable storage medium
CN111507231B (en) Automatic detection method and system for correctness of process steps
CN110674712A (en) Interactive behavior recognition method and device, computer equipment and storage medium
CN110309073B (en) Method, system and terminal for automatically detecting user interface errors of mobile application program
CN109657431B (en) Method for identifying user identity
DE102016200553A1 (en) COMPUTER VISION BASED PROCESS RECOGNITION
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN103996239A (en) Bill positioning and recognizing method and system based on multi-cue fusion
CN111462381A (en) Access control method based on face temperature identification, electronic device and storage medium
CN111582169A (en) Image recognition data error correction method, device, computer equipment and storage medium
CN111368682A (en) Method and system for detecting and identifying station caption based on faster RCNN
CN113095292A (en) Gesture recognition method and device, electronic equipment and readable storage medium
CN111462188A (en) Camera movement detection method and system
CN110929555B (en) Face recognition method and electronic device using same
CN110288040B (en) Image similarity judging method and device based on topology verification
CN111507232A (en) Multi-mode multi-strategy fused stranger identification method and system
CN111325937B (en) Method, device and electronic system for detecting crossing behavior
CN113378764A (en) Video face acquisition method, device, equipment and medium based on clustering algorithm
CN116630809A (en) Geological radar data automatic identification method and system based on intelligent image analysis
Dangla et al. A first step toward a fair comparison of evaluation protocols for text detection algorithms
CN114092542A (en) Bolt measuring method and system based on two-dimensional vision
CN113869364A (en) Image processing method, image processing apparatus, electronic device, and medium
CN110795705B (en) Track data processing method, device and equipment and storage medium
CN113160220A (en) Door handle homing and bending detection method based on deep learning
CN112836682A (en) Method and device for identifying object in video, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230105

Address after: 314506 room 116, building 4, No. 288, development avenue, Tongxiang Economic Development Zone, Tongxiang City, Jiaxing City, Zhejiang Province

Applicant after: Shengjing Intelligent Technology (Jiaxing) Co.,Ltd.

Address before: 102200 5th floor, building 6, No.8 Beiqing Road, Changping District, Beijing

Applicant before: SANY HEAVY INDUSTRY Co.,Ltd.

GR01 Patent grant