CN113111844B - Operation posture evaluation method and device, local terminal and readable storage medium - Google Patents

Operation posture evaluation method and device, local terminal and readable storage medium

Info

Publication number
CN113111844B
CN113111844B (application number CN202110463521.0A)
Authority
CN
China
Prior art keywords
target
target object
posture
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110463521.0A
Other languages
Chinese (zh)
Other versions
CN113111844A (en)
Inventor
崔岩
常青玲
徐翊迅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Wuyi University
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Wuyi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, Wuyi University
Priority to CN202110463521.0A
Publication of CN113111844A
Application granted
Publication of CN113111844B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The application is applicable to the technical field of image processing and provides an operation posture evaluation method and device, a local terminal, and a readable storage medium. The method comprises the following steps: acquiring an image to be identified; extracting a target image from the image to be identified, wherein the target image is an image representing pixel information of a target object; determining identity information of the target object according to the target image; calling a human body posture detection model corresponding to the identity information to identify the operation posture information of the target object; and evaluating the operation posture information according to an evaluation model to obtain an operation posture evaluation result. The method and the device can therefore comprehensively evaluate the operation posture information, improving the accuracy of operation posture evaluation.

Description

Operation posture evaluation method and device, local terminal and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an operation posture evaluation method and device, a local terminal, and a readable storage medium.
Background
With the rise of Industry 4.0, the demand for intelligent factories keeps growing, and the traditional manual management mode is being abandoned for the operation safety management of factory staff. In the prior art, the operation of factory staff is evaluated by combining the Internet of Things with sensors; for example, the operation of factory staff is evaluated by detecting equipment temperature changes through a temperature sensor. However, such an approach is simplistic and cannot accurately evaluate the operation of factory staff.
Disclosure of Invention
The embodiments of the application provide an operation posture evaluation method and device, a local terminal, and a readable storage medium, which can solve the prior-art problem that the operation of factory staff cannot be accurately evaluated.
In a first aspect, an embodiment of the present application provides an operation posture evaluation method, including:
acquiring an image to be identified;
extracting a target image in the image to be identified, wherein the target image is an image representing pixel information of a target object;
determining identity information of a target object according to the target image;
calling a human body posture detection model corresponding to the identity information, and identifying the operation posture information of the target object;
and evaluating the operation posture information according to the evaluation model to obtain an operation posture evaluation result.
In a possible implementation manner of the first aspect, extracting a target image from the image to be recognized includes:
extracting a candidate region in the image to be identified according to an image segmentation algorithm;
and aggregating the candidate region and the adjacent candidate region of the candidate region according to a similarity algorithm until a target candidate region is obtained, and taking a target image of the target candidate region in the image to be identified as an image corresponding to the target object.
In a possible implementation manner of the first aspect, determining identity information of a target object according to the target image includes:
determining a face candidate region in the target image by using an HOG feature description algorithm;
extracting feature points in the face candidate region;
coding the feature points to obtain feature vector values;
and inputting the characteristic vector value into a preset face matching library to obtain the identity information of the target object.
In one possible implementation manner of the first aspect, the identity information includes an operation type of the target object, and the operation type includes static operation and dynamic operation;
calling a human body posture detection model corresponding to the identity information and identifying the operation posture information of the target object includes:
when the operation type of the target object is static operation, inputting the target image into the static posture detection model to obtain the operation posture information of the target object;
and when the operation type of the target object is dynamic operation, acquiring a target video of the target object, and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object.
In one possible implementation manner of the first aspect, the static posture detection model includes a first convolutional network structure, a second convolutional network structure, and a third convolutional network structure;
when the operation type of the target object is static operation, inputting the target image into the static posture detection model to obtain the operation posture information of the target object includes:
determining that the operation type of the target object is static operation;
identifying a feature map of the target image;
calling the first convolution network structure to perform high-resolution first convolution operation on the feature map;
calling the second convolution network structure to perform second convolution operation with medium resolution on the feature map;
calling the third convolution network structure to perform low-resolution third convolution operation on the feature map;
and fusing the feature map after the first convolution operation, the feature map after the second convolution operation, and the feature map after the third convolution operation to obtain the operation posture information.
In a possible implementation manner of the first aspect, the dynamic posture detection model includes a target detection network, a local feature extraction network, a global feature extraction network, and a posture classification network;
when the operation type of the target object is dynamic operation, acquiring a target video of the target object, inputting the target video to the dynamic posture detection model, and acquiring operation posture information of the target object, including:
determining the operation type of the target object as dynamic operation;
acquiring a target video of a target object;
preprocessing the target video to obtain a first target sequence frame;
dividing the first target sequence frame based on a target detection network to obtain a second target sequence frame;
inputting the second target sequence frame into a local feature extraction network to obtain local dynamic features;
inputting the first target sequence frame into a global feature extraction network to obtain global dynamic features;
and fusing the local dynamic features and the global dynamic features according to the posture classification network, and classifying the fused local and global dynamic features to obtain the operation posture information of the target object.
In a possible implementation manner of the first aspect, the operation posture information includes a nonstandard posture type and a number of occurrences corresponding to the nonstandard posture type;
evaluating the operation posture information according to the evaluation model to obtain the operation posture evaluation result includes:
obtaining the operation posture evaluation result of the target object according to the following formula:
Score = F(s,n,m),
wherein Score represents the evaluation score of the operation posture, F represents a logistic regression function, s represents an evaluation period, n represents a nonstandard posture type, and m represents the number of occurrences corresponding to the nonstandard posture type.
In a second aspect, an embodiment of the present application provides a work posture evaluating apparatus, including:
the acquisition module is used for acquiring an image to be identified;
the extraction module is used for extracting a target image in the image to be identified, wherein the target image is an image representing pixel information of a target object;
the determining module is used for determining the identity information of the target object according to the target image;
the calling module is used for calling a human body posture detection model corresponding to the identity information and identifying the operation posture information of the target object;
and the evaluation module is used for evaluating the operation posture information according to the evaluation model to obtain an operation posture evaluation result.
In one possible implementation, the extraction module includes:
the extraction submodule is used for extracting a candidate region in the image to be identified according to an image segmentation algorithm;
and the aggregation module is used for aggregating the candidate region and the adjacent candidate region of the candidate region according to a similarity algorithm until a target candidate region is obtained, and taking a target image of the target candidate region in the image to be identified as an image corresponding to the target object.
In one possible implementation, the determining module includes:
the determining submodule is used for determining a face candidate region in the target image by using an HOG feature description algorithm;
the extraction submodule is used for extracting the characteristic points in the face candidate area;
the processing submodule is used for coding the characteristic points to obtain a characteristic vector value;
and the matching submodule is used for inputting the characteristic vector value into a preset face matching library to obtain the identity information of the target object.
In one possible implementation mode, the identity information comprises the operation type of the target object, and the operation type comprises static operation and dynamic operation;
the calling module comprises:
the first recognition submodule is used for inputting the target image into the static posture detection model to obtain the operation posture information of the target object when the operation type of the target object is static operation;
and the second identification submodule is used for acquiring a target video of the target object and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object when the operation type of the target object is dynamic operation.
In one possible implementation, the static posture detection model includes a first convolutional network structure, a second convolutional network structure, and a third convolutional network structure;
the first identification submodule includes:
a first determination unit, configured to determine that the job type of the target object is a static job;
the identification unit is used for identifying the feature map of the target image;
the first calling unit is used for calling the first convolution network structure to perform high-resolution first convolution operation on the feature map;
the second calling unit is used for calling the second convolution network structure to perform second convolution operation with medium resolution on the feature graph;
the third calling unit is used for calling the third convolution network structure to carry out low-resolution third convolution operation on the feature map;
and the fusion unit is used for fusing the feature map after the first convolution operation, the feature map after the second convolution operation, and the feature map after the third convolution operation to obtain the operation posture information.
In one possible implementation, the dynamic posture detection model includes a target detection network, a local feature extraction network, a global feature extraction network, and a posture classification network;
the second identification submodule includes:
the second determining subunit is used for determining that the job type of the target object is a dynamic job;
an acquisition unit configured to acquire a target video of the target object;
the preprocessing unit is used for preprocessing the target video to obtain a first target sequence frame;
the segmentation unit is used for segmenting the first target sequence frame based on a target detection network to obtain a second target sequence frame;
the local extraction unit is used for inputting the second target sequence frame into a local feature extraction network to obtain local dynamic features;
the global extraction unit is used for inputting the first target sequence frame into a global feature extraction network to obtain global dynamic features;
and the fusion processing unit is used for fusing the local dynamic features and the global dynamic features according to the posture classification network, and classifying the fused local and global dynamic features to obtain the operation posture information of the target object.
In one possible implementation, the operation posture information includes a nonstandard posture type and a number of occurrences corresponding to the nonstandard posture type;
the evaluation module comprises:
the evaluation submodule is used for obtaining an evaluation result of the operation posture of the target object according to the following formula:
Score = F(s,n,m),
wherein Score represents the evaluation score of the operation posture, F represents a logistic regression function, s represents an evaluation period, n represents a nonstandard posture type, and m represents the number of occurrences corresponding to the nonstandard posture type.
In a third aspect, an embodiment of the present application provides a local terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method according to any one of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a computer program that, when executed by a processor, implements the method according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
in the implementation of the application, the target image of the pixel information representing the target object is extracted, the identity information of the target object is determined according to the target image, the corresponding human body posture detection model is called, the operation posture information of the target object is identified, the operation posture information is comprehensively evaluated, and the effect of improving the operation posture evaluation accuracy is achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of an operation posture evaluation system provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario of the operation posture evaluation system provided in an embodiment of the present application;
Fig. 3 is a schematic flowchart of an operation posture evaluation method provided in an embodiment of the present application;
Fig. 4 is a specific flowchart of step S302 in fig. 3 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 5 is a schematic diagram related to step S402 in fig. 4 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 6 is a specific flowchart of step S303 in fig. 3 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of a sitting posture in the operation posture evaluation method provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of a standing posture in the operation posture evaluation method provided in an embodiment of the present application;
Fig. 9 is a schematic diagram of a carrying posture in the operation posture evaluation method provided in an embodiment of the present application;
Fig. 10 is a specific flowchart of step S304 in fig. 3 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 11 is a specific flowchart of step S101 in fig. 10 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 12 is a schematic structural diagram of the static posture detection model in the operation posture evaluation method provided in an embodiment of the present application;
Fig. 13 is a specific flowchart of step S102 in fig. 10 of the operation posture evaluation method provided in an embodiment of the present application;
Fig. 14 is a schematic structural diagram of the dynamic posture detection model in the operation posture evaluation method provided in an embodiment of the present application;
Fig. 15 is a schematic structural diagram of an operation posture evaluation device provided in an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a local terminal provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The technical solutions provided in the embodiments of the present application will be described below with specific embodiments.
Referring to fig. 1, a schematic structural diagram of a work posture evaluation system provided in an embodiment of the present application includes a camera, and a local terminal connected to the camera. The camera is a depth camera, and specifically may be a Kinect depth camera, and the Kinect depth camera may capture a color image (the format of the color image includes an RGB format) and a depth image. In addition, the local terminal may be a terminal device or a server, the terminal device may be a computing device such as a desktop computer, a notebook, a palm computer, and the like, and the server may be a computing device such as a cloud server, and the embodiment of the present application does not limit the specific type of the local terminal.
As shown in fig. 2, which is a schematic view of an application scenario of the operation posture evaluation system provided in the embodiment of the present application, the scenario includes any number of cameras disposed indoors, which photograph indoor objects over different time periods and upload the captured images to a local terminal for processing. For example, the application scenario of the embodiment of the present application may specifically be: cameras arranged in a factory photograph the factory staff over different time periods and upload the captured images to the local terminal for processing, so that the operation postures of the factory staff are recognized and then evaluated.
Referring to fig. 3, which schematically illustrates a flowchart of the operation posture evaluation method provided in the embodiment of the present application. By way of example and not limitation, the method may be applied to the above local terminal and may include the following steps:
and S301, acquiring an image to be identified.
It is understood that the image to be identified is an image taken by a camera provided in the factory. It should be noted that, in the embodiment of the present application, the image to be identified is acquired periodically; in addition, after the operation type of the target object is determined to be dynamic operation, a target video of the target object is further acquired.
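As a minimal sketch only, the periodic acquisition described above could look like the loop below. The patent does not name a capture API, so OpenCV's VideoCapture is used as an assumed stand-in for the Kinect interface, and the sampling interval, device index, and video length are illustrative parameters rather than values from the patent.

```python
import time
import cv2  # stand-in for the Kinect SDK, which the patent does not name

CAPTURE_INTERVAL_S = 60   # assumed sampling interval between images to be identified
VIDEO_FRAMES = 64         # assumed length of the target video for dynamic operation

def next_image_to_identify(cap: cv2.VideoCapture):
    """Grab one frame per evaluation tick as the 'image to be identified'."""
    ok, frame = cap.read()
    return frame if ok else None

def capture_target_video(cap: cv2.VideoCapture, num_frames: int = VIDEO_FRAMES):
    """Collect a short frame sequence as the target video once dynamic operation is determined."""
    frames = []
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    return frames

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # device index is an assumption
    try:
        while True:
            image = next_image_to_identify(cap)
            # ... extract target image, determine identity, and branch on operation type ...
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cap.release()
```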
And S302, extracting a target image in the image to be recognized.
The target image is an image representing pixel information of a target object.
It can be understood that, because the position of the human body target in the image is uncertain and the indoor environment is complex, the position of the human body target in the image needs to be determined first, that is, the target image is extracted, and subsequent processing can then be performed on the target image.
In a specific application, as shown in fig. 4, which is a specific flowchart of step S302 in fig. 3 of the operation posture evaluation method provided in the embodiment of the present application, extracting the target image from the image to be identified includes:
step S401, extracting a candidate region in the image to be identified according to an image segmentation algorithm.
The image segmentation algorithm may be based on a candidate region selection (region proposal) algorithm. Illustratively, the candidate regions extracted from the image to be identified according to the image segmentation algorithm may be the regions inside the white rectangular frames in the image shown in fig. 5 (a).
Step S402, aggregating the candidate region and the adjacent candidate region of the candidate region according to a similarity algorithm until a target candidate region is obtained, and taking a target image of the target candidate region in the image to be identified as an image corresponding to the target object.
The similarity algorithm may be a geometric similarity algorithm. Illustratively, the similarity between all neighboring candidate regions in fig. 5 (a) is first calculated, and the two most similar candidate regions are grouped together. New similarities are then calculated between the resulting candidate region and its neighboring candidate regions, and this process of aggregating the most similar candidate regions is repeated until the entire image becomes a single candidate region. The image corresponding to this single candidate region is taken as the target image, that is, the image of the candidate region in the white rectangular frame in fig. 5 (b).
It can be understood that, in the embodiment of the application, the candidate regions in the image to be recognized are extracted by using an image segmentation algorithm, and then iterative aggregation is performed according to the similarity between adjacent candidate regions, so that the target image in the image to be recognized is obtained, and the recognition accuracy of the target image is improved.
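The following is a minimal sketch of this extract-then-aggregate idea, not the patent's exact implementation: candidate regions are generated with OpenCV's selective-search segmentation (an assumed concrete choice for the unspecified "image segmentation algorithm"), and a toy geometric similarity merges boxes pairwise until one target region remains. For brevity the sketch compares all pairs rather than only adjacent regions.

```python
from itertools import combinations

import cv2  # requires opencv-contrib-python for the ximgproc module

def extract_candidate_boxes(image, max_regions=50):
    """Step S401: candidate regions via selective-search-style segmentation."""
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image)
    ss.switchToSelectiveSearchFast()
    rects = ss.process()[:max_regions]                       # (x, y, w, h) proposals
    return [(x, y, x + w, y + h) for (x, y, w, h) in rects]  # as (x1, y1, x2, y2)

def box_area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def merge_boxes(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def geometric_similarity(a, b):
    """Toy measure: how tightly two boxes fill their union box (near 1 = touching/overlapping)."""
    return (box_area(a) + box_area(b)) / max(box_area(merge_boxes(a, b)), 1)

def aggregate_to_target_box(boxes):
    """Step S402: repeatedly merge the most similar pair until one region remains."""
    boxes = list(boxes)
    while len(boxes) > 1:
        i, j = max(combinations(range(len(boxes)), 2),
                   key=lambda ij: geometric_similarity(boxes[ij[0]], boxes[ij[1]]))
        merged = merge_boxes(boxes[i], boxes[j])
        boxes = [b for k, b in enumerate(boxes) if k not in (i, j)] + [merged]
    return boxes[0]  # bounding box of the target image within the image to be identified
```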
And step S303, determining the identity information of the target object according to the target image.
The identity information comprises the job number, the working time, the post type and the job type of the target object.
Referring to fig. 6, a specific flowchart of step S303 in fig. 3 of the job posture estimation method provided in the embodiment of the present application is shown, where determining the identity information of the target object according to the target image includes:
step S601, determining a face candidate region in the target image by using an HOG feature description algorithm.
And step S602, extracting feature points in the face candidate area.
And step S603, encoding the feature points to obtain feature vector values.
The encoding process may be template feature vector encoding.
Step S604, inputting the characteristic vector value into a preset human face matching library to obtain the identity information of the target object.
The preset face matching library stores verification feature codes, and each verification feature code is marked with corresponding identity information.
It can be understood that, in the embodiment of the application, the identity information of the target object is obtained by calculating the feature vector corresponding to the target image and then inputting the feature vector into the preset face matching library, so that a basis is provided for subsequent operation posture identification of the target object.
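As one hedged illustration of this HOG-plus-encoding pipeline (steps S601 to S604), the open-source face_recognition library (built on dlib, and not named in the patent) provides a HOG face detector and 128-dimensional face encodings; the matching library is modelled here as a dict, and the 0.6 distance threshold is a conventional dlib default rather than a value from the patent.

```python
import numpy as np
import face_recognition  # dlib-based; an assumed stand-in, not named in the patent

def identify_target(target_image, face_library, tolerance=0.6):
    """Steps S601-S604: HOG face detection -> feature encoding -> match against the library.

    face_library: dict mapping identity info (e.g. employee number) -> stored 128-d encoding.
    Returns the matched identity info, or None if no face is found or no match is close enough.
    """
    locations = face_recognition.face_locations(target_image, model="hog")          # S601
    if not locations:
        return None
    encoding = face_recognition.face_encodings(
        target_image, known_face_locations=locations)[0]                            # S602-S603
    ids = list(face_library)
    distances = face_recognition.face_distance(
        np.array([face_library[i] for i in ids]), encoding)                         # S604
    best = int(np.argmin(distances))
    return ids[best] if distances[best] <= tolerance else None
```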
And S304, calling a human body posture detection model corresponding to the identity information, and identifying the operation posture information of the target object.
It should be noted that the human body posture detection model is trained in advance on sample images and can identify the nonstandard posture type of the operation posture of the target object and the number of occurrences corresponding to the nonstandard posture type.
The operation posture information includes the nonstandard posture type of the target object and the number of occurrences corresponding to the nonstandard posture type. For example, the nonstandard posture types include nonstandard working postures and nonstandard operating postures: the sitting or standing posture of a factory worker performing static operation may be nonstandard (for example, the probability of danger increases when the head is too close to the hands), or the operating posture may be nonstandard (for example, a wrong operation sequence damages the machine, or a dangerous area of the machine is touched during operation). The human body posture detection model includes a static posture detection model and a dynamic posture detection model; the static posture detection model is used to identify the human body posture of a target object performing static operation, and the dynamic posture detection model is used to identify the human body posture of a target object performing dynamic operation.
It should be noted that, in an actual application scenario, the operations of factory staff are divided into static operations and dynamic operations. More specifically, the static operations include seated operations (for example, the seated assembly operation on a production line shown in fig. 7) and standing operations (for example, the standing assembly operation on a production line shown in fig. 8), and the dynamic operations may be carrying operations (including a. picking up an object, b. carrying an object, and c. mounting an object, as shown in fig. 9). In the prior art, static operations and dynamic operations are identified simultaneously by a single neural network detection model, which cannot handle static and dynamic human body postures compatibly. Therefore, in the embodiment of the present application, the operation type of the target object is determined first, and the posture detection model corresponding to that operation type is then called for identification, thereby improving the accuracy of model identification.
Referring to fig. 10, which is a specific flowchart of step S304 in fig. 3 of the operation posture evaluation method provided in the embodiment of the present application, calling the human body posture detection model corresponding to the identity information and identifying the operation posture information of the target object includes:
and S101, when the work type of the target object is static work, inputting the target image into a static posture detection model to obtain the work posture information of the target object.
The static detection model comprises a first convolution network structure, a second convolution network structure and a third convolution network structure, and image features are processed among the first convolution network structure, the second convolution network structure and the third convolution network structure according to a preset sequence.
In a specific application, as shown in fig. 11, a specific flowchart of step S101 in fig. 10 of the method for evaluating a job posture provided in the embodiment of the present application is shown, when a job type of a target object is a static job, inputting a target image into a static posture detection model to obtain job posture information of the target object, where the method includes:
and step S111, determining the job type of the target object as a static job.
And step S112, identifying a characteristic diagram of the target image.
The identification process of the feature map may be a contour feature extraction method.
And step S113, calling a first convolution network structure to perform high-resolution first convolution operation on the feature map.
And step S114, calling a second convolution network structure to perform second convolution operation with medium resolution on the feature graph.
And step S115, calling a third convolution network structure to perform low-resolution third convolution operation on the feature map.
And step S116, fusing the feature map after the first convolution operation, the feature map after the second convolution operation, and the feature map after the third convolution operation to obtain the operation posture information.
The fusion refers to the up- and down-sampling calculations among the first, second, and third convolutional network structures shown in fig. 12.
It can be understood that, owing to the influence of the factory environment, lighting, and the like, and in order to recognize the posture without reducing the resolution of the image, a parallel multilayer convolutional network structure is adopted to perform convolution operations and sampling calculations on the feature map of the target object, so as to obtain the operation posture of the target object.
In a specific application, as shown in fig. 12, which is a schematic structural diagram of the static posture detection model provided in the embodiment of the present application, the application of steps S113 to S116 is described with reference to fig. 12: the first convolutional network structure performs the high-resolution first convolution operation on the feature map; the second convolutional network structure performs the medium-resolution second convolution operation on the feature map after two convolution operations of the first convolutional network structure, with up- and down-sampling operations carried out between the appropriate convolutional network structures; the third convolutional network structure simultaneously performs the low-resolution convolution operation on the feature map after five convolution operations of the first convolutional network structure and on the feature map after four convolution operations of the first convolutional network structure; finally, up- and down-sampling operations are carried out among the first, second, and third convolutional network structures, and the operation posture information of the target object is output. Illustratively, the operation posture of the target object may be the sitting posture shown in fig. 7 or the standing posture shown in fig. 8, and the corresponding operation posture information may be that the nonstandard posture type of the factory employee performing seated operation is a nonstandard working posture, occurring 2 times.
In the embodiment of the application, the static posture detection model adopts a parallel multilayer convolutional structure to carry out convolution operations and sampling calculations on the feature map of the target object, thereby improving the accuracy of image processing.
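For orientation only, the sketch below expresses the three-branch, multi-resolution idea of steps S113 to S116 in PyTorch; the channel widths, strides, and 17-keypoint output head are illustrative assumptions, and the structure in fig. 12 (with repeated exchanges between branches) is more elaborate than this single fusion stage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeResolutionFusion(nn.Module):
    """Sketch of the parallel multi-resolution idea in steps S113-S116."""

    def __init__(self, in_channels: int = 32, num_joints: int = 17):
        super().__init__()
        c = in_channels
        self.high = nn.Conv2d(c, c, 3, stride=1, padding=1)          # first structure: high resolution
        self.mid = nn.Conv2d(c, 2 * c, 3, stride=2, padding=1)       # second structure: 1/2 resolution
        self.low = nn.Conv2d(2 * c, 4 * c, 3, stride=2, padding=1)   # third structure: 1/4 resolution
        self.head = nn.Conv2d(7 * c, num_joints, kernel_size=1)      # fused features -> keypoint heatmaps

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (N, in_channels, H, W) feature map identified from the target image (S112)
        high = self.high(feature_map)                                # S113: high-resolution convolution
        mid = self.mid(high)                                         # S114: medium-resolution convolution
        low = self.low(mid)                                          # S115: low-resolution convolution
        size = high.shape[-2:]
        fused = torch.cat(
            [high,
             F.interpolate(mid, size=size, mode="bilinear", align_corners=False),
             F.interpolate(low, size=size, mode="bilinear", align_corners=False)],
            dim=1)                                                   # S116: upsample and fuse the branches
        return self.head(fused)                                      # per-joint heatmaps -> posture information
```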
And S102, when the operation type of the target object is dynamic operation, acquiring a target video of the target object, and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object.
The dynamic posture detection model comprises a target detection network, a local feature extraction network, a global feature extraction network, and a posture classification network.
It can be understood that continuous video data needs to be processed to identify the human body posture of the target object during dynamic operation. For this reason, the embodiment of the present application adopts the dynamic posture detection model to identify the local features and the global features of the target video separately and then classify them to obtain the operation posture of the target object, so as to improve the accuracy of operation posture identification when the target object performs dynamic operation. In a specific application, as shown in fig. 13, which is a specific flowchart of step S102 in fig. 10 of the operation posture evaluation method provided in the embodiment of the present application, when the operation type of the target object is dynamic operation, acquiring a target video of the target object and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object includes:
step S131, determining that the job type of the target object is a dynamic job.
And step S132, acquiring a target video of the target object.
Specifically, a camera is invoked to capture a sequence of images of a target object over a continuous period of time, i.e., a target video.
And step S133, preprocessing the target video to obtain a first target sequence frame.
Wherein the first target sequence is the input frame sequence in fig. 13.
And S134, segmenting the first target sequence frame based on the target detection network to obtain a second target sequence frame.
Wherein, the second target sequence frame is the human target sequence in fig. 13.
And step S135, inputting the second target sequence frame into a local feature extraction network to obtain local dynamic features.
And S136, inputting the first target sequence frame into a global feature extraction network to obtain global dynamic features.
And S137, fusing the local dynamic features and the global dynamic features according to the posture classification network, and classifying the fused local and global dynamic features to obtain the operation posture information of the target object.
For ease of understanding, the specific application process of the above steps S133 to S137 may refer to the schematic structural diagram of the dynamic posture detection model shown in fig. 14. For example, the operation posture of the target object may be the carrying posture shown in fig. 9; correspondingly, the operation posture information may be that the nonstandard posture type of the factory worker performing dynamic operation is a nonstandard operating posture (for example, carrying an article with one hand), occurring 3 times.
In the embodiment of the application, the dynamic posture detection model is adopted to identify the local features and the global features of the target video separately, and the operation posture of the target object is then obtained through classification, thereby improving the accuracy of operation posture identification when the target object performs dynamic operation.
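A hedged PyTorch sketch of the local/global two-branch idea in steps S133 to S137 follows; the 3D-convolution backbones, feature dimensions, and four posture classes are illustrative assumptions standing in for the unspecified local feature extraction, global feature extraction, and posture classification networks.

```python
import torch
import torch.nn as nn

class DynamicPostureClassifier(nn.Module):
    """Sketch of steps S133-S137: local branch on person-cropped frames, global branch on full frames."""

    def __init__(self, feat_dim: int = 256, num_classes: int = 4):
        super().__init__()
        # stand-ins for the local / global feature extraction networks
        self.local_net = nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.AdaptiveAvgPool3d(1),
                                       nn.Flatten(), nn.Linear(16, feat_dim))
        self.global_net = nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.AdaptiveAvgPool3d(1),
                                        nn.Flatten(), nn.Linear(16, feat_dim))
        self.classifier = nn.Linear(2 * feat_dim, num_classes)      # posture classification network

    def forward(self, full_frames: torch.Tensor, person_frames: torch.Tensor) -> torch.Tensor:
        # full_frames / person_frames: (N, 3, T, H, W) first / second target sequence frames
        local_feat = self.local_net(person_frames)                  # S135: local dynamic features
        global_feat = self.global_net(full_frames)                  # S136: global dynamic features
        fused = torch.cat([local_feat, global_feat], dim=1)         # S137: feature fusion
        return self.classifier(fused)                               # logits over operation posture categories
```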
And S305, evaluating the operation posture information according to the evaluation model to obtain an operation posture evaluation result.
In specific application, the operation posture evaluation result of the target object is obtained according to the following formula:
Score = F(s,n,m),
wherein Score represents the evaluation score of the operation posture, F represents a logistic regression function, s represents an evaluation period, n represents a nonstandard posture type, and m represents the number of occurrences corresponding to the nonstandard posture type. The logistic regression function may be any function that embodies the logistic regression concept and is not specifically enumerated here.
It can be understood that, in the embodiment of the application, the target object is comprehensively evaluated based on the evaluation period, the nonstandard posture type and the occurrence frequency corresponding to the nonstandard posture type and based on the logistic regression function, so that the accuracy of the evaluation of the operation posture is improved.
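Purely as an illustration of how such a score might be computed, the sketch below applies a sigmoid to a weighted combination of the three inputs and scales it to 0 to 100. The weights, the treatment of the posture type n as a numeric severity code, and the output scale are all assumptions, since the patent only states that F is a logistic regression function of s, n, and m.

```python
import math

# Illustrative weights and bias; not taken from the patent.
WEIGHTS = {"bias": 4.0, "w_s": 0.05, "w_n": 0.8, "w_m": 0.5}

def posture_score(s: float, n: int, m: int, w=WEIGHTS) -> float:
    """Score = F(s, n, m): sigmoid of a linear combination, scaled to 0-100.

    n is treated here as a numeric severity code for the nonstandard posture type
    (an assumption); a higher n or m lowers the score, while a longer evaluation
    period s with the same counts raises it slightly.
    """
    z = w["bias"] + w["w_s"] * s - w["w_n"] * n - w["w_m"] * m
    return 100.0 / (1.0 + math.exp(-z))

# Example: one nonstandard posture type (severity code 2) occurring 3 times
# within an 8-hour evaluation period (units are assumptions).
print(round(posture_score(s=8, n=2, m=3), 1))
```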
In the implementation of the application, a target image representing the pixel information of the target object is extracted, the identity information of the target object is determined according to the target image, the corresponding human body posture detection model is called to identify the operation posture information of the target object, and the operation posture information is then comprehensively evaluated, thereby improving the accuracy of operation posture evaluation.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 15 shows a structural block diagram of the operation posture evaluation apparatus provided in the embodiment of the present application, corresponding to the operation posture evaluation method described in the above embodiment; for convenience of explanation, only the parts related to the embodiment of the present application are shown.
Referring to fig. 15, the apparatus includes:
an obtaining module 151, configured to obtain an image to be identified;
an extracting module 152, configured to extract a target image from the image to be identified, where the target image is an image representing pixel information of a target object;
a determining module 153, configured to determine identity information of the target object according to the target image;
a calling module 154, configured to call a human body posture detection model corresponding to the identity information, and identify operation posture information of the target object;
and the evaluation module 155 is configured to evaluate the job posture information according to the evaluation model to obtain a job posture evaluation result.
In one possible implementation, the extraction module includes:
the extraction submodule is used for extracting a candidate region in the image to be identified according to an image segmentation algorithm;
and the aggregation module is used for aggregating the candidate region and the adjacent candidate region of the candidate region according to a similarity algorithm until a target candidate region is obtained, and taking a target image of the target candidate region in the image to be identified as an image corresponding to the target object.
In one possible implementation, the determining module includes:
the determining submodule is used for determining a face candidate region in the target image by using an HOG feature description algorithm;
the extraction submodule is used for extracting the characteristic points in the face candidate area;
the processing submodule is used for coding the characteristic points to obtain a characteristic vector value;
and the matching submodule is used for inputting the characteristic vector value into a preset face matching library to obtain the identity information of the target object.
In one possible implementation mode, the identity information comprises the operation type of the target object, and the operation type comprises static operation and dynamic operation;
the calling module comprises:
the first recognition submodule is used for inputting the target image into the static posture detection model to obtain the operation posture information of the target object when the operation type of the target object is static operation;
and the second identification submodule is used for acquiring a target video of the target object and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object when the operation type of the target object is dynamic operation.
In one possible implementation, the static posture detection model includes a first convolutional network structure, a second convolutional network structure, and a third convolutional network structure;
the first identification submodule includes:
a first determination unit, configured to determine that the job type of the target object is a static job;
the identification unit is used for identifying the feature map of the target image;
the first calling unit is used for calling the first convolution network structure to perform high-resolution first convolution operation on the feature map;
the second calling unit is used for calling the second convolution network structure to perform second convolution operation with medium resolution on the feature graph;
the third calling unit is used for calling the third convolution network structure to carry out low-resolution third convolution operation on the feature map;
and the fusion unit is used for fusing the feature map after the first convolution operation, the feature map after the second convolution operation, and the feature map after the third convolution operation to obtain the operation posture information.
In one possible implementation, the dynamic posture detection model includes a target detection network, a local feature extraction network, a global feature extraction network, and a posture classification network;
the second identification submodule includes:
the second determining subunit is used for determining that the job type of the target object is a dynamic job;
an acquisition unit configured to acquire a target video of the target object;
the preprocessing unit is used for preprocessing the target video to obtain a first target sequence frame;
the segmentation unit is used for segmenting the first target sequence frame based on a target detection network to obtain a second target sequence frame;
the local extraction unit is used for inputting the second target sequence frame into a local feature extraction network to obtain local dynamic features;
the global extraction unit is used for inputting the first target sequence frame into a global feature extraction network to obtain global dynamic features;
and the fusion processing unit is used for fusing the local dynamic features and the global dynamic features according to the posture classification network, and classifying the fused local and global dynamic features to obtain the operation posture information of the target object.
In one possible implementation, the operation posture information includes a nonstandard posture type and a number of occurrences corresponding to the nonstandard posture type;
the evaluation module comprises:
the evaluation submodule is used for obtaining an evaluation result of the operation posture of the target object according to the following formula:
Score = F(s,n,m),
wherein Score represents the evaluation score of the operation posture, F represents a logistic regression function, s represents an evaluation period, n represents a nonstandard posture type, and m represents the number of occurrences corresponding to the nonstandard posture type.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 16 is a schematic structural diagram of a local terminal according to an embodiment of the present application. As shown in fig. 16, the local terminal 16 of this embodiment includes: at least one processor 160, a memory 161, and a computer program 162 stored in the memory 161 and executable on the at least one processor 160, the processor 160 implementing the steps in any of the various method embodiments described above when executing the computer program 162.
The local terminal 16 may be a computing device such as a desktop computer, a notebook, a palm top computer, and a cloud server. The local terminal may include, but is not limited to, a processor 160, a memory 161. Those skilled in the art will appreciate that fig. 16 is merely an example of the local terminal 16 and does not constitute a limitation of the local terminal 16, and may include more or less components than those shown, or some components in combination, or different components, such as input output devices, network access devices, etc.
The processor 160 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 161 may be an internal storage unit of the local terminal 16 in some embodiments, for example, a hard disk or a memory of the local terminal 16. The memory 161 may also be an external storage device of the local terminal 16 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the local terminal 16. Further, the memory 161 may also include both an internal storage unit of the local terminal 16 and an external storage device. The memory 161 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 161 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application further provides a readable storage medium, which may be specifically a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a local terminal, recording medium, computer Memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; the division of the modules or units is only a logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to substantially depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the protection scope of the present application.

Claims (9)

1. An operation posture evaluation method, the method comprising: acquiring an image to be identified; extracting a target image from the image to be identified, wherein the target image is an image representing pixel information of a target object; determining identity information of the target object according to the target image; calling a human body posture detection model corresponding to the identity information, and identifying operation posture information of the target object; and evaluating the operation posture information according to an evaluation model to obtain an operation posture evaluation result; wherein the identity information comprises an operation type of the target object, the operation type comprising a static operation and a dynamic operation; the human body posture detection model comprises a static posture detection model and a dynamic posture detection model; and calling the human body posture detection model corresponding to the identity information and identifying the operation posture information of the target object comprises: when the operation type of the target object is the static operation, inputting the target image into the static posture detection model to obtain the operation posture information of the target object; and when the operation type of the target object is the dynamic operation, acquiring a target video of the target object, and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object.
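For orientation only, the dispatch logic of claim 1 can be sketched as a small Python routine. Every name below is a hypothetical placeholder injected as a callable; this is a reading aid, not the patented implementation.

```python
def evaluate_work_posture(frame, extract_target, identify, static_model,
                          dynamic_model, capture_video, evaluate):
    """Sketch of the claim-1 flow; all callables are injected placeholders."""
    target_img = extract_target(frame)        # target image: pixels belonging to the worker
    identity = identify(target_img)           # identity info, incl. operation type (claim 3)
    if identity["operation_type"] == "static":
        posture = static_model(target_img)    # single-frame posture model (claim 4)
    else:
        clip = capture_video(identity["worker_id"])
        posture = dynamic_model(clip)         # sequence posture model (claim 5)
    return evaluate(posture)                  # Score = F(s, n, m) (claim 6)
```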
2. The operation posture evaluation method according to claim 1, wherein extracting the target image from the image to be identified comprises: extracting candidate regions from the image to be identified according to an image segmentation algorithm; and aggregating each candidate region with its adjacent candidate regions according to a similarity algorithm until a target candidate region is obtained, and taking the image of the target candidate region in the image to be identified as the target image corresponding to the target object.
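Claim 2 describes a segment-then-merge proposal scheme; selective search is one well-known instance of that idea. A minimal sketch using OpenCV's contrib module follows, assuming opencv-contrib-python is installed; it is an illustrative substitute, not the patented segmentation/similarity algorithm.

```python
import cv2

def propose_target_regions(image_bgr, max_regions=50):
    """Over-segment the image, then greedily merge neighbouring regions by
    colour/texture/size similarity (selective search), returning candidate boxes."""
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image_bgr)
    ss.switchToSelectiveSearchFast()   # graph-based segmentation + iterative merging
    rects = ss.process()               # (x, y, w, h) candidate regions, coarse to fine
    return rects[:max_regions]
```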
3. The operation posture evaluation method according to claim 2, wherein determining the identity information of the target object according to the target image comprises: determining a face candidate region in the target image by using an HOG feature description algorithm; extracting feature points in the face candidate region; encoding the feature points to obtain a feature vector value; and inputting the feature vector value into a preset face matching library to obtain the identity information of the target object.
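Claim 3 corresponds to HOG-based face localisation followed by feature encoding and library matching. The sketch below uses the open-source face_recognition package (which wraps dlib's HOG detector and 128-dimensional face encoder) purely as a stand-in; modelling the preset face matching library as two parallel lists is an assumption.

```python
import numpy as np
import face_recognition  # dlib-backed HOG face detector + 128-d face encoder

def identify_worker(target_image_rgb, known_encodings, known_identities, tolerance=0.6):
    """Locate a face with the HOG model, encode it, and match it against a
    pre-built library of (encoding, identity) pairs. Illustrative only."""
    locations = face_recognition.face_locations(target_image_rgb, model="hog")
    if not locations or len(known_encodings) == 0:
        return None
    encodings = face_recognition.face_encodings(
        target_image_rgb, known_face_locations=locations)
    distances = face_recognition.face_distance(known_encodings, encodings[0])
    best = int(np.argmin(distances))
    return known_identities[best] if distances[best] <= tolerance else None
```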
4. The operation posture evaluation method according to claim 1, wherein the static posture detection model comprises a first convolutional network structure, a second convolutional network structure and a third convolutional network structure; and when the operation type of the target object is the static operation, inputting the target image into the static posture detection model to obtain the operation posture information of the target object comprises: determining that the operation type of the target object is the static operation; identifying a feature map of the target image; calling the first convolutional network structure to perform a high-resolution first convolution operation on the feature map; calling the second convolutional network structure to perform a medium-resolution second convolution operation on the feature map; calling the third convolutional network structure to perform a low-resolution third convolution operation on the feature map; and fusing the feature map after the first convolution operation, the feature map after the second convolution operation and the feature map after the third convolution operation to obtain the operation posture information.
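Claim 4 reads like a three-branch, multi-resolution convolutional head (in the spirit of HRNet-style designs). A hedged PyTorch sketch follows; the channel counts, keypoint count, and pooling/upsampling choices are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeBranchPoseHead(nn.Module):
    """Parallel convolutions on full-, half- and quarter-resolution copies of
    the feature map, upsampled back and fused into keypoint heatmaps."""
    def __init__(self, in_ch=64, mid_ch=32, num_keypoints=17):
        super().__init__()
        self.high = nn.Conv2d(in_ch, mid_ch, 3, padding=1)   # first conv, high resolution
        self.mid = nn.Conv2d(in_ch, mid_ch, 3, padding=1)    # second conv, medium resolution
        self.low = nn.Conv2d(in_ch, mid_ch, 3, padding=1)    # third conv, low resolution
        self.fuse = nn.Conv2d(mid_ch * 3, num_keypoints, 1)  # fusion -> heatmaps

    def forward(self, feat):
        h = self.high(feat)
        m = self.mid(F.avg_pool2d(feat, 2))
        l = self.low(F.avg_pool2d(feat, 4))
        m = F.interpolate(m, size=h.shape[-2:], mode="bilinear", align_corners=False)
        l = F.interpolate(l, size=h.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([h, m, l], dim=1))
```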
5. The operation posture evaluation method according to claim 1, wherein the dynamic posture detection model comprises a target detection network, a local feature extraction network, a global feature extraction network and a posture classification network; and when the operation type of the target object is the dynamic operation, acquiring the target video of the target object and inputting the target video into the dynamic posture detection model to obtain the operation posture information of the target object comprises: determining that the operation type of the target object is the dynamic operation; acquiring the target video of the target object; preprocessing the target video to obtain a first target frame sequence; segmenting the first target frame sequence based on the target detection network to obtain a second target frame sequence; inputting the second target frame sequence into the local feature extraction network to obtain local dynamic features; inputting the first target frame sequence into the global feature extraction network to obtain global dynamic features; and fusing the local dynamic features and the global dynamic features according to the posture classification network, and classifying the fused local and global dynamic features to obtain the operation posture information of the target object.
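Claim 5 can be read as a two-stream video model: a local stream over detector-cropped frames and a global stream over the full preprocessed frames, fused before classification. The PyTorch sketch below uses deliberately tiny stand-in encoders; the actual local/global feature extraction networks are not specified by the claim.

```python
import torch
import torch.nn as nn

class DynamicPostureClassifier(nn.Module):
    """Two-stream sketch: local stream on person crops, global stream on full
    frames; pooled features are concatenated and classified into posture types."""
    def __init__(self, feat_dim=128, num_postures=8):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.local_net = encoder()    # fed the detector-cropped second frame sequence
        self.global_net = encoder()   # fed the preprocessed first frame sequence
        self.classifier = nn.Linear(feat_dim * 2, num_postures)

    def forward(self, cropped_clip, full_clip):
        # clips: (batch, channels, time, height, width)
        local_feat = self.local_net(cropped_clip)
        global_feat = self.global_net(full_clip)
        return self.classifier(torch.cat([local_feat, global_feat], dim=1))
```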
6. The operation posture evaluation method according to claim 1, wherein the operation posture information comprises a nonstandard posture type and a number of occurrences corresponding to the nonstandard posture type; and evaluating the operation posture information according to the evaluation model to obtain the operation posture evaluation result comprises: obtaining the operation posture evaluation result of the target object according to the following formula: Score = F(s, n, m), where Score represents the operation posture evaluation score, F represents a logistic regression function, s represents the evaluation period, n represents the nonstandard posture type, and m represents the number of occurrences corresponding to the nonstandard posture type.
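The claim leaves F unspecified beyond being a logistic regression function of the evaluation period s, the nonstandard posture type n, and the occurrence count m. A toy scoring function of that shape is sketched below; the per-type weights, bias, and 0-100 scaling are invented for illustration and are not values from the patent.

```python
import math

def posture_score(period_hours, type_weights, counts, bias=4.0):
    """Logistic-style score of the form Score = F(s, n, m): weight each
    nonstandard posture type by severity, normalise by the evaluation period,
    and map through a sigmoid onto a 0-100 scale."""
    penalty = sum(type_weights[n] * counts.get(n, 0) for n in type_weights)
    penalty /= max(period_hours, 1e-6)
    return 100.0 / (1.0 + math.exp(penalty - bias))

# Example: two nonstandard posture types observed over an 8-hour shift.
score = posture_score(8, {"bent_back": 0.5, "raised_arm": 0.2},
                      {"bent_back": 12, "raised_arm": 30})
```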
7. An operation posture evaluation apparatus, characterized by comprising: an acquisition module, configured to acquire an image to be identified; an extraction module, configured to extract a target image from the image to be identified, wherein the target image is an image representing pixel information of a target object; a determining module, configured to determine identity information of the target object according to the target image; a calling module, configured to call a human body posture detection model corresponding to the identity information and identify operation posture information of the target object; and an evaluation module, configured to evaluate the operation posture information according to an evaluation model to obtain an operation posture evaluation result; wherein the identity information comprises an operation type of the target object, the operation type comprising a static operation and a dynamic operation; the human body posture detection model comprises a static posture detection model and a dynamic posture detection model; and the calling module comprises: a first recognition submodule, configured to input the target image into the static posture detection model to obtain the operation posture information of the target object when the operation type of the target object is the static operation; and a second recognition submodule, configured to acquire a target video of the target object and input the target video into the dynamic posture detection model to obtain the operation posture information of the target object when the operation type of the target object is the dynamic operation.
8. A local terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
9. A readable storage medium, storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1 to 6.
CN202110463521.0A 2021-04-28 2021-04-28 Operation posture evaluation method and device, local terminal and readable storage medium Active CN113111844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110463521.0A CN113111844B (en) 2021-04-28 2021-04-28 Operation posture evaluation method and device, local terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110463521.0A CN113111844B (en) 2021-04-28 2021-04-28 Operation posture evaluation method and device, local terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN113111844A CN113111844A (en) 2021-07-13
CN113111844B true CN113111844B (en) 2022-02-15

Family

ID=76721844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110463521.0A Active CN113111844B (en) 2021-04-28 2021-04-28 Operation posture evaluation method and device, local terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN113111844B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222192A (en) * 2022-05-20 2022-10-21 西安电子科技大学广州研究院 Method for applying multi-mode machine learning to automatic production line balance
US11836825B1 (en) * 2022-05-23 2023-12-05 Dell Products L.P. System and method for detecting postures of a user of an information handling system (IHS) during extreme lighting conditions
CN114999648B (en) * 2022-05-27 2023-03-24 浙江大学医学院附属儿童医院 Early screening system, equipment and storage medium for cerebral palsy based on baby dynamic posture estimation
CN115063740A (en) * 2022-06-10 2022-09-16 嘉洋智慧安全生产科技发展(北京)有限公司 Safety monitoring method, device, equipment and computer readable storage medium
CN117011945B (en) * 2023-10-07 2024-03-19 之江实验室 Action capability assessment method, action capability assessment device, computer equipment and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7083189B2 (en) * 2018-03-29 2022-06-10 国立大学法人 奈良先端科学技術大学院大学 Training data set creation method and equipment
CN111046840B (en) * 2019-12-26 2023-06-23 天津理工大学 Personnel safety monitoring method and system based on artificial intelligence in pollution remediation environment
CN112380735A (en) * 2020-12-12 2021-02-19 江西洪都航空工业股份有限公司 Cabin engineering virtual assessment device

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016167268A (en) * 2015-03-06 2016-09-15 国立大学法人 筑波大学 Gesture modeling device, gesture modeling method, program for gesture modeling system, and gesture modeling system
CN107545415A (en) * 2017-10-01 2018-01-05 上海量科电子科技有限公司 Payment evaluation method, client and system based on action
CN110490034A (en) * 2018-05-14 2019-11-22 欧姆龙株式会社 Motion analysis device, action-analysing method, recording medium and motion analysis system
CN109919036A (en) * 2019-01-18 2019-06-21 南京理工大学 Worker's work posture classification method based on time-domain analysis depth network
CN110489849A (en) * 2019-08-13 2019-11-22 沈阳风驰软件股份有限公司 The simulation management method, apparatus and equipment of railcar business sending and receiving vehicle business
CN110738163A (en) * 2019-10-12 2020-01-31 中国矿业大学 mine personnel illegal action recognition system
CN111144263A (en) * 2019-12-20 2020-05-12 山东大学 Construction worker high-fall accident early warning method and device
CN111582078A (en) * 2020-04-23 2020-08-25 广州微盾科技股份有限公司 Operation method based on biological information and gesture, terminal device and storage medium
CN111814587A (en) * 2020-06-18 2020-10-23 浙江大华技术股份有限公司 Human behavior detection method, teacher behavior detection method, and related system and device
CN111753764A (en) * 2020-06-29 2020-10-09 济南浪潮高新科技投资发展有限公司 Gesture recognition method of edge terminal based on attitude estimation
CN112084898A (en) * 2020-08-25 2020-12-15 西安理工大学 Assembling operation action recognition method based on static and dynamic separation
CN111931701A (en) * 2020-09-11 2020-11-13 平安国际智慧城市科技股份有限公司 Gesture recognition method and device based on artificial intelligence, terminal and storage medium
CN112101802A (en) * 2020-09-21 2020-12-18 广东电网有限责任公司电力科学研究院 Attitude load data evaluation method and device, electronic equipment and storage medium
CN112309025A (en) * 2020-10-30 2021-02-02 北京市商汤科技开发有限公司 Information display method and device, electronic equipment and storage medium
CN112580535A (en) * 2020-12-23 2021-03-30 恒大新能源汽车投资控股集团有限公司 Vehicle danger warning method and device and computer readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ren-Jye Dzeng et al., "Marker-less Based Detection of Repetitive Awkward Postures for Construction Workers", The 2018 International Academic Research Conference in Vienna, 2018-12-31, pp. 75-83 *
Xiong Ruoxin et al., "DNN-based work posture assessment method and application", China Safety Science Journal, 2018-05-31, Vol. 28, No. 5, pp. 105-110 *
Han Yu et al., "Design and implementation of an image-recognition-based intelligent safety inspection system for construction workers", Journal of Safety Science and Technology, 2016-10-31, Vol. 12, No. 10, p. 143 *
Wang Yongchao, "Study on the evaluation of musculoskeletal disorder hazard levels among automobile assembly workers", China Master's Theses Full-text Database (Medicine and Health Sciences), 2015-05-15, No. 05, p. E055-11 *

Also Published As

Publication number Publication date
CN113111844A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
US10558844B2 (en) Lightweight 3D vision camera with intelligent segmentation engine for machine vision and auto identification
CN105701476A (en) Machine vision-based automatic identification system and method for production line products
WO2022170844A1 (en) Video annotation method, apparatus and device, and computer readable storage medium
KR102649930B1 (en) Systems and methods for finding and classifying patterns in images with a vision system
CN108573471B (en) Image processing apparatus, image processing method, and recording medium
CN110533654A (en) The method for detecting abnormality and device of components
US9836673B2 (en) System, method and computer program product for training a three dimensional object indentification system and identifying three dimensional objects using semantic segments
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN111191582A (en) Three-dimensional target detection method, detection device, terminal device and computer-readable storage medium
CN114863464B (en) Second-order identification method for PID drawing picture information
CN109902550A (en) The recognition methods of pedestrian's attribute and device
TW202201275A (en) Device and method for scoring hand work motion and storage medium
CN111461143A (en) Picture copying identification method and device and electronic equipment
CN111199198A (en) Image target positioning method, image target positioning device and mobile robot
CN113557546B (en) Method, device, equipment and storage medium for detecting associated objects in image
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN117218633A (en) Article detection method, device, equipment and storage medium
CN114092385A (en) Industrial machine fault detection method and device based on machine vision
CN111275693B (en) Counting method and counting device for objects in image and readable storage medium
CN112084874B (en) Object detection method and device and terminal equipment
CN114494355A (en) Trajectory analysis method and device based on artificial intelligence, terminal equipment and medium
WO2015136716A1 (en) Image processing device, image sensor, and image processing method
JP7035357B2 (en) Computer program for image judgment, image judgment device and image judgment method
US20220284700A1 (en) Task appropriateness determination apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant