CN111832526B - Behavior detection method and device - Google Patents

Behavior detection method and device

Info

Publication number
CN111832526B
CN111832526B (application number CN202010717764.8A)
Authority
CN
China
Prior art keywords
key points
monitored person
feature
image
feature data
Prior art date
Legal status
Active
Application number
CN202010717764.8A
Other languages
Chinese (zh)
Other versions
CN111832526A (en)
Inventor
吴嵩波
杨明明
汪月林
姚炜
Current Assignee
Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Original Assignee
Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Priority to CN202010717764.8A
Publication of CN111832526A
Application granted
Publication of CN111832526B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Social Psychology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a behavior detection method and a behavior detection device. An image of a monitoring area collected by a camera device is acquired; key point features are extracted from the image to obtain first feature data and second feature data corresponding to the image, where the first feature data represents the key points in the image and the second feature data represents the body parts to which the key points belong; a connection diagram of the key points belonging to the same monitored person is obtained from the first feature data and the second feature data; and the smoking behavior of the monitored person to whom the connection diagram belongs is detected from the connection diagram. Because the smoking behavior of a monitored person in the monitoring area is detected by analyzing the feature data corresponding to the image of that area, smoking is monitored automatically through image analysis, which reduces the probability of omission and enables all-weather detection.

Description

Behavior detection method and device
Technical Field
The application belongs to the technical field of data processing, and particularly relates to a behavior detection method and device.
Background
At present, many people smoke during the production process. When smoking, an operator often works a machine with one hand while holding the cigarette in the other, so the body leans to one side, the center of gravity shifts, force is applied unevenly, and actions easily become irregular or deformed. Smoking also tends to make the mouth sticky and the throat itchy, or cause coughing, and in severe cases the operator lowers the head and bends over. Smoking during the production process therefore inevitably affects the accuracy of personnel operations and endangers production safety.
Therefore, the smoking behavior of personnel needs to be monitored during production. At present, security personnel either judge smoking behavior visually through cameras or check it on manual patrols, but such manual monitoring is prone to omission and cannot achieve all-weather detection.
Disclosure of Invention
Accordingly, the present application is directed to a behavior detection method and apparatus that detect the smoking behavior of a monitored person in a monitoring area by analyzing the feature data corresponding to an image of that area, monitoring smoking automatically so as to reduce the probability of omission and enable all-weather detection.
In one aspect, the present application provides a behavior detection method, the method comprising:
acquiring an image of a monitoring area acquired by a camera device;
extracting key point features from the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong;
Obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
And detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
Optionally, the obtaining the connection graph of the key points belonging to the same monitored person according to the first feature data and the second feature data includes:
Determining key points belonging to the same monitored person body part according to the first characteristic data and the second characteristic data;
And connecting the key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
Optionally, the determining, according to the first feature data and the second feature data, the key points of the body parts belonging to the same monitored person includes:
determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data;
calculating vector average values of all key points belonging to the same monitored person;
calculating the correlation between the key points according to the vector average value;
and determining the key points belonging to the same monitored person body part from all the key points belonging to the same body part according to the correlation among the key points.
Optionally, extracting the key point feature of the image to obtain first feature data and second feature data corresponding to the image includes:
Acquiring a feature map of the image; and obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, a first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and the feature data output by the last layer as input, and the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
Optionally, detecting, according to the connection diagram, a smoking behavior of a monitored person to which the connection diagram belongs includes:
Acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection diagram;
according to the deflection angle, calculating the distance between two specific key points of the monitored person, wherein the two specific key points are the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand;
and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
In another aspect, the present application provides a behavior detection apparatus, the apparatus comprising:
The acquisition unit is used for acquiring the image of the monitoring area acquired by the camera device;
the extraction unit is used for extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong;
the obtaining unit is used for obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and the detection unit is used for detecting the smoking behavior of the monitored personnel to which the connection diagram belongs according to the connection diagram.
Optionally, the obtaining unit includes:
a determining subunit, configured to determine, according to the first feature data and the second feature data, a key point belonging to a body part of the same monitored person;
And the connection subunit is used for connecting the key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
Optionally, the determining subunit is configured to determine all key points belonging to the same body part according to the first feature data and the second feature data; calculating vector average values of all key points belonging to the same monitored person; calculating the correlation between the key points according to the vector average value; and determining the key points belonging to the same monitored person body part from all the key points belonging to the same body part according to the correlation among the key points.
Optionally, the extracting unit is configured to obtain a feature map of the image; and obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, a first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and the feature data output by the last layer as input, and the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
Optionally, the detecting unit is configured to obtain a deflection angle of the face of the monitored person according to the key points belonging to the face in the connection graph; according to the deflection angle, calculate the distance between two specific key points of the monitored person, wherein the two specific key points are the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand; and detect the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
According to the behavior detection method and device, the image of the monitoring area acquired by the camera device is acquired; extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong; obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data; according to the connection diagram, the smoking behavior of the monitored person to which the connection diagram belongs is detected, the smoking behavior of the monitored person in the monitoring area is detected through the characteristic data analysis corresponding to the image of the monitoring area, the smoking behavior is automatically monitored through the image analysis, so that the omission probability is reduced, and all-weather detection can be realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a behavior detection method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a preset neural network model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a connection diagram according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a behavior detection device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a flowchart of a behavior detection method provided by an embodiment of the present application may include the following steps:
101: and acquiring an image of the monitoring area acquired by the image pick-up device. The monitoring area is an area where the monitored person is located, and the smoke exhausting behavior of the monitored person in the monitoring area is monitored by acquiring an image of the monitoring area. The camera may be a camera with a shooting range pointing to the monitoring area, and the camera may collect an image of the monitoring area, and may collect a video of the monitoring area, and obtain an image from the video, which is not limited in this embodiment.
102: And extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong.
This embodiment monitors the smoking behavior of the monitored person through image analysis. When a monitored person smokes, the face deflects, and the distance between a certain point on the face and the hand tip (i.e., the forwardmost point of the hand, such as the tip of the middle finger) changes. Therefore, during key point feature extraction, all key points that may belong to a body part are obtained from the image, and the first feature data and the second feature data are obtained from those key points.
In one mode, the monitored persons in an image are determined through image recognition technology, feature extraction is performed on each monitored person to obtain the key points on his or her body parts, and the correspondence between key points and body parts is recorded; the key points on the body parts of all monitored persons in the image then serve as the first feature data, and the correspondences between key points and body parts serve as the second feature data.
In another mode, a feature map of an image is obtained, and first feature data and second feature data corresponding to the image are obtained through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and the feature data output by the last layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
As shown in fig. 2, F represents the feature map, Stage 1 represents the first of the plurality of feature extraction layers, and Stage t (t ≥ 2) represents the layers other than the first; the total number of feature extraction layers of the preset neural network model is denoted T. Branch 1 represents the first feature extraction branch and Branch 2 the second feature extraction branch; each branch can process its input through at least one of convolution, pooling, and full connection, and C in fig. 2 denotes convolution. From the preset neural network model shown in fig. 2, the working principle is as follows:
the method comprises the steps of presetting a first layer of a plurality of feature extraction layers of a neural network model, taking a feature map as input, carrying out convolution, pooling and full connection processing on the feature map through a first feature extraction branch of the first layer to obtain S 1 of a first feature extraction branch output of the first layer, and carrying out convolution, pooling and full connection processing on the feature map through a second feature extraction branch of the first layer to obtain L 1 of a second feature extraction branch output of the first layer.
The outputs S_1 and L_1 of the first layer, together with the feature map, serve as the input of the second layer; that input is convolved, pooled, and fully connected by the first and second feature extraction branches of the second layer to obtain the outputs S_2 and L_2, and so on. The outputs S_{T-1} and L_{T-1} of layer T-1, together with the feature map, serve as the input of layer T (the last layer); that input is convolved, pooled, and fully connected by the first and second feature extraction branches of layer T to obtain S_T and L_T. S_T, the output of the first feature extraction branch of layer T, is taken as the first feature data, and L_T, the output of the second feature extraction branch of layer T, as the second feature data.
As the working principle of the preset neural network model in fig. 2 shows, the relationship between layer t and layer t+1 of the plurality of feature extraction layers is: the outputs S_t and L_t of layer t, together with the feature map, serve as the input of layer t+1. The output S_T of layer T predicts each key point in the image, while L_T predicts the body part to which each key point belongs, and even the trend (direction) of a key point within that part. For example, if a key point is the left shoulder, the left elbow can be predicted, and the left shoulder and left elbow are two points of one body part, the left arm; thus both the body part a key point belongs to and which point of that part it is can be predicted.
The preset neural network model may comprise 10 feature extraction layers; the first and second feature data obtained through 10 layers can meet the requirements of smoking behavior detection. Compared with using more feature extraction layers, extracting the feature data with a model of fewer layers reduces running time and improves detection speed while still ensuring detection accuracy. The network architecture of the preset neural network model may be, but is not limited to, that of a convolutional neural network, such as a VGG (Visual Geometry Group) model.
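The patent does not disclose the exact layer configuration, so the PyTorch sketch below only mirrors the wiring described above: stage 1 consumes the feature map F, every later stage consumes F concatenated with the previous stage's two branch outputs, and the last stage's branch outputs serve as the first feature data (S_T) and second feature data (L_T). The channel counts (19 key point maps, matching the 19 annotated key points mentioned below, and 38 trend-map channels) and the purely convolutional branches are assumptions for illustration:

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One feature extraction layer with two branches (Branch 1 / Branch 2)."""
    def __init__(self, in_ch, s_ch=19, l_ch=38):
        super().__init__()
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, out_ch, 1),
            )
        self.branch1 = branch(s_ch)  # -> S_t, key point maps
        self.branch2 = branch(l_ch)  # -> L_t, body part / trend maps

    def forward(self, x):
        return self.branch1(x), self.branch2(x)

class KeypointNet(nn.Module):
    """T stacked stages; each stage t >= 2 sees F concatenated with S_{t-1}, L_{t-1}."""
    def __init__(self, feat_ch=128, T=10, s_ch=19, l_ch=38):
        super().__init__()
        stages = [Stage(feat_ch, s_ch, l_ch)]
        for _ in range(T - 1):
            stages.append(Stage(feat_ch + s_ch + l_ch, s_ch, l_ch))
        self.stages = nn.ModuleList(stages)

    def forward(self, F):
        outs = []
        x = F
        for stage in self.stages:
            S, L = stage(x)
            outs.append((S, L))
            x = torch.cat([F, S, L], dim=1)  # feature map + previous outputs
        return outs  # outs[-1] is (S_T, L_T): first and second feature data
```

Setting T = 10 corresponds to the 10 feature extraction layers suggested above; a VGG-style backbone producing F would match the architecture hint, though neither choice is mandated by the text.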
In this embodiment, the training process of the preset neural network model is as follows: the feature map of one image and a ground-truth image with manually annotated key points are taken as the input of the preset neural network model, for example the feature map of a color image of size w × h (the width and height values are not limited) and a ground-truth image with 19 manually annotated key points, and the model parameters of the multiple feature extraction layers are adjusted.
During parameter adjustment, the L2 norm between the key points predicted by the preset neural network model and the manually annotated key points, computed by the loss function, serves as the condition for ending the adjustment: when the L2 norm is minimal, adjustment of the model parameters ends. Other conditions may also serve as the ending condition, which this embodiment does not detail.
Wherein the L2 norm of each feature extraction layer t is calculated as:

f_S^t = Σ_j Σ_p W(p) · ‖S_j^t(p) − S_j^*(p)‖_2^2
f_L^t = Σ_c Σ_p W(p) · ‖L_c^t(p) − L_c^*(p)‖_2^2

W(p) is the (trained) weight of pixel p in the image; j indexes the j-th key point; c is the sequence number of a key point connection vector: for example, the connection from the j-th key point to the (j+1)-th key point is treated as a vector from j to j+1, and c identifies that vector. S_j^t(p) denotes the predicted key point information (e.g., key point coordinates), S_j^*(p) the manually annotated key point information, L_c^t(p) the predicted trend information, and L_c^*(p) the manually annotated trend information.
The L2 norms of all feature extraction layers are summed to obtain the L2 norm of the preset neural network model, i.e. f = Σ_{t=1}^{T} (f_S^t + f_L^t). During training, x_{j,k} denotes the coordinate of the j-th key point of the k-th person; the ground-truth confidence at a pixel follows a normal curve peaked at x_{j,k}, so if the coordinate of a pixel extracted by the preset neural network model is close to x_{j,k}, the peak of the curve (its maximum) is reached and the extracted pixel is determined to be a key point predicted by the model. During training, if a predicted key point or a manually annotated key point is missing (i.e., no matching key point is found), that point can be ignored in the L2 norm calculation.
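A minimal sketch of this training objective (PyTorch, continuing the assumptions above; the tensor shapes and the name stage_outputs are illustrative) applies the per-pixel weight W(p) to both branches of every stage and sums over stages:

```python
import torch

def stage_l2_loss(S_pred, S_true, L_pred, L_true, W):
    """Weighted L2 loss of one feature extraction layer.

    W is a per-pixel weight map broadcastable to the prediction shape
    (e.g. N x 1 x H x W); setting W to zero where a key point is
    unannotated ignores missing matches, as the text describes.
    """
    f_S = (W * (S_pred - S_true) ** 2).sum()
    f_L = (W * (L_pred - L_true) ** 2).sum()
    return f_S + f_L

def total_loss(stage_outputs, S_true, L_true, W):
    """Sum the per-stage losses over all T feature extraction layers."""
    return sum(stage_l2_loss(S, S_true, L, L_true, W)
               for (S, L) in stage_outputs)
```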
103: Obtain the connection diagram of the key points belonging to the same monitored person from the first feature data and the second feature data. Since several monitored persons may be present in the monitoring area, the first and second feature data correspond to multiple monitored persons; the key points belonging to one monitored person must therefore be determined from the first and second feature data, and that person's key points used to detect his or her smoking behavior. Smoking behavior can be detected from the posture indicated by the connection diagram of a monitored person's key points, so the corresponding connection diagram is constructed as required. The process is as follows:
Determining key points belonging to the same monitored person body part according to the first characteristic data and the second characteristic data; and connecting the key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
That is, the key points belonging to one monitored person, such as those of that person's face and limbs, are first selected from all detected key points, and are then connected according to the body parts to which they belong, yielding the connection diagram of that monitored person's key points, as shown in fig. 3.
Wherein determining key points belonging to a body part of the same monitored person comprises: determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data; calculating vector average values of all key points belonging to the same monitored person; calculating the correlation between key points according to the vector average value; and determining the key points belonging to the same monitored person body part from all the key points belonging to the same body part according to the correlation among the key points.
The second feature data can represent the body part to which a key point belongs, but when several monitored persons appear in one image it is not known to which person a key point belongs. In this embodiment, therefore, all key points belonging to the same body part are first selected from the first feature data according to the body parts represented by the second feature data; for example, all left-elbow key points and all left-shoulder key points belonging to the left upper arm are selected from the first feature data.
Two key points are selected from all the key points belonging to the same body part, and the vector average value of the two selected key points belonging to the same monitored person, that is, of one body part of the same monitored person, is calculated. For example, a unit vector is computed by:

v = (x_{j2,k} − x_{j1,k}) / ‖x_{j2,k} − x_{j1,k}‖_2
Here k denotes the k-th monitored person, j1 and j2 denote two connectable key points (for example, the elbow and the wrist are connected through the arm), and c denotes the c-th limb, such as the arm. The formula above gives the unit vector pointing from the coordinate x_{j1,k} of key point j1 of the k-th monitored person to the coordinate x_{j2,k} of key point j2, and the vector field of limb c of person k is

L_{c,k}(p) = v if pixel p falls on limb c of person k, and 0 otherwise,

where pixel p falls on the limb if it satisfies the two conditions

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k} and |v⊥ · (p − x_{j1,k})| ≤ σ_l,

with l_{c,k} the limb length of the c-th limb, σ_l the limb width of the c-th limb, and v⊥ a vector perpendicular to v.
The vector average value at pixel p of limb c in the image, over the monitored persons, is calculated as:

L_c^*(p) = (1 / n_c(p)) · Σ_k L_{c,k}(p)

where n_c(p) is the number of non-zero unit vectors at pixel p on the c-th limb over all monitored persons, and K is the total number of monitored persons in the monitoring area. This is only one example of the vector average calculation; this embodiment may also compute it in other ways, which are not detailed here.
The correlation between two key points is obtained by integrating the vector average value; it indicates the likelihood that the two key points belong to the same monitored person's body part. For example, the correlation is calculated as:

E = ∫_0^1 L_c^*(p(u)) · (d_{j2} − d_{j1}) / ‖d_{j2} − d_{j1}‖_2 du, with p(u) = (1 − u) · d_{j1} + u · d_{j2},

where d_{j1} and d_{j2} are the coordinates of the two candidate key points.
For example, if all left-elbow and left-shoulder key points belonging to the left upper arm have been selected, then, taking one left-elbow key point as reference, the vector average values between it and every left-shoulder key point are computed; integrating each vector average value gives the correlation between that left-elbow key point and every left-shoulder key point. One left-shoulder key point is then selected according to the correlations, for instance the one with the largest correlation value, and the selected left-shoulder and left-elbow key points belong to the left upper arm of the same monitored person.
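A minimal sketch of this grouping step (NumPy; the two-channel field layout paf_x/paf_y, the sample count, and the greedy assignment are illustrative assumptions, since the text notes other computations are possible) approximates the correlation integral by sampling along the segment between two candidate points, then pairs parts by highest correlation:

```python
import numpy as np

def correlation(paf_x, paf_y, d1, d2, num_samples=10):
    """Line-integral score between candidate key points d1 and d2.

    paf_x, paf_y: the two channels of the averaged vector field L*_c
    (each H x W); d1, d2: (x, y) coordinates. Approximates the integral
    of L*_c(p(u)) . v over p(u) = (1-u)*d1 + u*d2 by sampling.
    """
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    v = d2 - d1
    norm = np.linalg.norm(v)
    if norm < 1e-6:
        return 0.0
    v /= norm
    h, w = paf_x.shape
    score = 0.0
    for u in np.linspace(0.0, 1.0, num_samples):
        p = (1.0 - u) * d1 + u * d2
        x = int(np.clip(round(p[0]), 0, w - 1))
        y = int(np.clip(round(p[1]), 0, h - 1))
        score += paf_x[y, x] * v[0] + paf_y[y, x] * v[1]
    return score / num_samples

def pair_parts(elbows, shoulders, paf_x, paf_y):
    """Greedy assignment: each elbow takes the free shoulder with the
    highest correlation, so both points belong to one person's upper arm."""
    pairs, free = [], list(range(len(shoulders)))
    for i, e in enumerate(elbows):
        if not free:
            break
        j = max(free, key=lambda s: correlation(paf_x, paf_y, e, shoulders[s]))
        pairs.append((i, j))
        free.remove(j)
    return pairs
```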
In this way, all key points belonging to the same monitored person can be selected from the first feature data, together with the body part each belongs to, so the connection diagram of that monitored person's key points is obtained by connecting the key points according to their body parts.
104: And detecting the smoking behavior of the monitored personnel to which the connection diagram belongs according to the connection diagram.
The connection diagram represents the posture of the monitored person to whom it belongs. When a monitored person smokes, the posture changes relative to a non-smoking posture: the face deflects, and the distance between a certain point on the face and the hand tip (i.e., the forwardmost point of the hand, such as the tip of the middle finger) changes. On this basis, whether the posture in the connection diagram is a smoking posture can be determined from whether the face deflects and from the distance between that facial point and the hand tip; if so, the monitored person is determined to be smoking. When smoking behavior is determined, this embodiment can output the image of the monitored person to facilitate supervision by personnel.
In this embodiment, one possible way to detect the smoking behavior of the monitored person to which the connection diagram belongs is as follows:
Acquire the deflection angle of the monitored person's face from the key points belonging to the face in the connection diagram; according to the deflection angle, calculate the distance between two specific key points of the monitored person, the two specific key points being the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand; and detect the smoking behavior of the monitored person to whom the connection diagram belongs from the distance between the two specific key points. If the distance between the two specific key points is smaller than a preset distance, it is determined that the monitored person to whom the connection diagram belongs is smoking.
In this embodiment, the deflection angle of the face may be obtained by, but is not limited to, a calibrated-ellipse method, which determines whether the face pitches up or down. As in the connection diagram of fig. 3, the side view of the face is treated as an ellipse: the y-axis is the midline of the ellipse and the x-axis is the perpendicular bisector of the line connecting the eyes and the mouth. Without up-down pitching, A1 = A2 holds under vertical in-depth rotation; after vertical in-depth rotation, the x-axis is no longer the perpendicular bisector of the eye-mouth line, and by the property of an isosceles triangle the first in-depth rotation angle is A0 = (A1 − A2) / 2.
When the face deflects left or right, i.e., rotates sideways in depth, the angle between the under-nose point and the left and right outer eye corners is B_eyeout, the angle between the under-nose point and the left and right inner eye corners is B_eyein, and the angle between the under-nose point and the left and right mouth corners is B_mouth. To reduce error, the second in-depth rotation angle of the face is preferably the average of these three angles: B0 = (B_eyeout + B_eyein + B_mouth) / 3.
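As a small worked sketch of these two angle formulas (Python; the angle measurements A1, A2, B_eyeout, B_eyein, B_mouth are assumed to be supplied by a facial landmark detector, which the text does not specify):

```python
def vertical_pitch_angle(a1_deg, a2_deg):
    """First in-depth rotation angle A0 = (A1 - A2) / 2, per the
    isosceles-triangle argument; A1, A2 are the angles measured on
    either side of the eye-mouth bisector (assumed inputs)."""
    return 0.5 * (a1_deg - a2_deg)

def side_yaw_angle(b_eyeout, b_eyein, b_mouth):
    """Second in-depth rotation angle B0, averaged over three
    measurements to reduce error: B0 = (B_eyeout + B_eyein + B_mouth) / 3."""
    return (b_eyeout + b_eyein + b_mouth) / 3.0
```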
After A0 and B0 are obtained, the face pose is solved by a quasi-Newton method to obtain the deflection angle of the face, denoted a_face. The distance between the two specific key points of the monitored person is then calculated from the deflection angle, where (x_1, y_1) and (x_2, y_2) are the coordinates of the two specific key points used to calculate the distance, such as the coordinates of the right ear and the right hand tip described above.
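A minimal sketch of this final decision step (Python/NumPy; the exact deflection-angle correction to the distance is not recoverable from the text, so a plain Euclidean distance is assumed and a_face only gates the check, and the threshold value is illustrative):

```python
import numpy as np

def smoking_suspected(ear_xy, hand_tip_xy, a_face_deg, max_dist_px=40.0):
    """Flag a possible smoking posture from one side's ear and hand tip.

    Assumptions: plain Euclidean distance stands in for the patent's
    angle-adjusted distance, and max_dist_px plays the role of the
    preset distance mentioned in the text.
    """
    (x1, y1), (x2, y2) = ear_xy, hand_tip_xy
    dist = float(np.hypot(x1 - x2, y1 - y2))
    face_deflected = abs(a_face_deg) > 0.0  # face must actually deflect
    return face_deflected and dist < max_dist_px
```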
With the behavior detection method above, an image of the monitoring area collected by the camera device is acquired; key point features are extracted from the image to obtain the first and second feature data, the first representing the key points in the image and the second the body parts to which they belong; the connection diagram of the key points belonging to the same monitored person is obtained from the first and second feature data; and the smoking behavior of the monitored person to whom the connection diagram belongs is detected from that diagram. Because smoking is detected by analyzing the feature data corresponding to the image of the monitoring area, it is monitored automatically through image analysis, which reduces the probability of omission and enables all-weather detection.
Corresponding to the above method embodiment, the embodiment of the present application further provides a behavior detection device, an optional structure of which is shown in fig. 4. The behavior detection device may include: an acquiring unit 10, an extracting unit 20, an obtaining unit 30, and a detecting unit 40.
The acquiring unit 10 is configured to acquire an image of the monitoring area collected by the camera device. The monitoring area is the area where the monitored person is located, and the smoking behavior of the monitored person in the monitoring area is monitored by acquiring images of that area. The camera device may be a camera whose shooting range covers the monitoring area; it may collect still images of the monitoring area, or collect video of the area from which images are then obtained, which is not limited in this embodiment.
The extraction unit 20 is configured to perform feature extraction on the key points of the image, so as to obtain first feature data and second feature data corresponding to the image, where the first feature data is used to represent the key points in the image, and the second feature data is used to represent the body parts to which the key points belong.
This embodiment monitors the smoking behavior of the monitored person through image analysis. When a monitored person smokes, the face deflects and the distance between a certain point on the face and the hand tip (i.e., the forwardmost point of the hand, such as the tip of the middle finger) changes. Therefore, during key point feature extraction, all key points that may belong to a body part are obtained from the image, and the first and second feature data are obtained from those key points.
One way for the extraction unit 20 to obtain the first and second feature data is: the extraction unit 20 acquires a feature map of the image, then obtains the first and second feature data corresponding to the image through a preset neural network model and the feature map. The preset neural network model comprises a plurality of feature extraction layers, each comprising a first and a second feature extraction branch; the first layer takes the feature map as input, every other layer takes the feature map and the feature data output by the previous layer as input, and the outputs of the first and second feature extraction branches of the last layer serve as the first and second feature data respectively. For the training and use of the preset neural network model, please refer to the method embodiment, which is not repeated here. The extraction unit 20 may also obtain the first and second feature data in other ways; please refer to the above embodiment, which is not repeated here.
And the obtaining unit 30 is configured to obtain a connection graph of key points belonging to the same monitored person according to the first feature data and the second feature data. Because a plurality of monitored personnel may exist in the monitoring area, the first characteristic data and the second characteristic data correspond to the plurality of monitored personnel, so that key points belonging to one monitored personnel need to be determined from the first characteristic data and the second characteristic data, and the key points of the monitored personnel are utilized to detect the smoking behavior of the monitored personnel. In the process of detecting the smoking behavior, the detection can be performed according to the gesture indicated by the connection diagram of the key points of the monitored personnel, and the corresponding connection diagram of the key points of the monitored personnel needs to be constructed, so that an optional structure of the unit 30 is as follows:
The obtaining unit 30 includes: determining a subunit and connecting the subunits. The determining subunit is used for determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data; and the connection subunit is used for connecting the key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
Wherein the determining subunit determines key points belonging to the same monitored person's body part comprising: determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data; calculating vector average values of all key points belonging to the same monitored person; calculating the correlation between key points according to the vector average value; according to the correlation between the key points, the key points belonging to the same monitored person's body part are determined from all the key points belonging to the same body part, and the detailed process is referred to the above method embodiments and is not repeated here.
The detecting unit 40 is configured to detect, from the connection diagram, the smoking behavior of the monitored person to whom it belongs. The connection diagram represents that person's posture. When a monitored person smokes, the posture changes relative to a non-smoking posture: the face deflects, and the distance between a certain point on the face and the hand tip (i.e., the forwardmost point of the hand, such as the tip of the middle finger) changes. On this basis, whether the posture in the connection diagram is a smoking posture can be determined from whether the face deflects and from that distance; if so, the monitored person is determined to be smoking. When smoking behavior is determined, this embodiment can output the image of the monitored person to facilitate supervision by personnel.
In this embodiment, one possible way for the detection unit 40 to detect the smoking behaviour of the monitored person to which the connection diagram belongs is as follows:
Acquire the deflection angle of the monitored person's face from the key points belonging to the face in the connection diagram; according to the deflection angle, calculate the distance between two specific key points of the monitored person, the two specific key points being the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand; and detect the smoking behavior of the monitored person from the distance between the two specific key points. If that distance is smaller than a preset distance, it is determined that the monitored person to whom the connection diagram belongs is smoking. For the detailed process, please refer to the above method embodiment, which is not repeated here.
With the behavior detection device above, an image of the monitoring area collected by the camera device is acquired; key point features are extracted from the image to obtain the first and second feature data, the first representing the key points in the image and the second the body parts to which they belong; the connection diagram of the key points belonging to the same monitored person is obtained from the first and second feature data; and the smoking behavior of the monitored person to whom the connection diagram belongs is detected from that diagram. Because smoking is detected by analyzing the feature data corresponding to the image of the monitoring area, it is monitored automatically through image analysis, which reduces the probability of omission and enables all-weather detection.
In addition, the embodiment of the application also provides a storage medium storing computer program code which, when executed, implements the behavior detection method described above.
The embodiment of the application also provides a monitoring device comprising a processor, a communication interface, and a display screen. The processor acquires, through the communication interface, an image of the monitoring area collected by a camera; performs key point feature extraction on the image to obtain the first and second feature data, the first representing the key points in the image and the second the body parts to which they belong; obtains the connection diagram of the key points belonging to the same monitored person from the first and second feature data; and detects, from the connection diagram, the smoking behavior of the monitored person to whom it belongs. The display screen is configured to display the image of the monitoring area. For the working process of the processor, please refer to the above method embodiment, which is not repeated here.
It should be noted that, each embodiment in the present specification may be described in a progressive manner, and features described in each embodiment in the present specification may be replaced or combined with each other, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are referred to each other. For the apparatus class embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference is made to the description of the method embodiments for relevant points.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (4)

1. A method of behavior detection, the method comprising:
acquiring an image of a monitoring area acquired by a camera device;
extracting key point features from the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong;
Obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
Detecting the smoking behavior of a monitored person to which the connection diagram belongs according to the connection diagram; the detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram comprises: acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection diagram; according to the deflection angle, calculating the distance between two specific key points of the monitored person, wherein the two specific key points are the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand; detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points;
The obtaining the connection diagram of the key points belonging to the same monitored person according to the first characteristic data and the second characteristic data comprises the following steps:
Determining key points belonging to the same monitored person body part according to the first characteristic data and the second characteristic data;
Connecting key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person;
wherein, the determining the key points of the body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data comprises:
determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data;
calculating vector average values of all key points belonging to the same monitored person;
Calculating the correlation between the key points according to the vector average value; the relevance among the key points is obtained by integrating the vector average value;
and determining the key points belonging to the same monitored person body part from all the key points belonging to the same body part according to the correlation among the key points.
2. The method of claim 1, wherein the extracting the key point feature of the image to obtain the first feature data and the second feature data corresponding to the image includes:
Acquiring a feature map of the image; and obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, a first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and the feature data output by the last layer as input, and the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
3. A behavior detection apparatus, the apparatus comprising:
The acquisition unit is used for acquiring the image of the monitoring area acquired by the camera device;
the extraction unit is used for extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong;
the obtaining unit is used for obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
The detection unit is used for detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram; the detection unit is used for acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection diagram; according to the deflection angle, calculating the distance between two specific key points of the monitored person, wherein the two specific key points are the left ear of the face and the tip of the left hand, or the right ear of the face and the tip of the right hand; detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points;
wherein the obtaining unit includes:
a determining subunit, configured to determine, according to the first feature data and the second feature data, a key point belonging to a body part of the same monitored person;
The connection subunit is used for connecting the key points belonging to the body parts of the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person;
The determining subunit is configured to determine all key points belonging to the same body part according to the first feature data and the second feature data; calculating vector average values of all key points belonging to the same monitored person; calculating the correlation between the key points according to the vector average value; the relevance among the key points is obtained by integrating the vector average value; and determining the key points belonging to the same monitored person body part from all the key points belonging to the same body part according to the correlation among the key points.
4. A device according to claim 3, wherein the extraction unit is configured to obtain a feature map of the image; and obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, a first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and the feature data output by the last layer as input, and the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
CN202010717764.8A 2020-07-23 2020-07-23 Behavior detection method and device Active CN111832526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010717764.8A CN111832526B (en) 2020-07-23 2020-07-23 Behavior detection method and device

Publications (2)

Publication Number Publication Date
CN111832526A (en) 2020-10-27
CN111832526B (en) 2024-06-11

Family

ID=72925232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010717764.8A Active CN111832526B (en) 2020-07-23 2020-07-23 Behavior detection method and device

Country Status (1)

Country Link
CN (1) CN111832526B (en)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444791A (en) * 1991-09-17 1995-08-22 Fujitsu Limited Moving body recognition apparatus
CN105260703A (en) * 2015-09-15 2016-01-20 西安邦威电子科技有限公司 Detection method suitable for smoking behavior of driver under multiple postures
CN108038469A (en) * 2017-12-27 2018-05-15 百度在线网络技术(北京)有限公司 Method and apparatus for detecting human body
CN108629282A (en) * 2018-03-29 2018-10-09 福州海景科技开发有限公司 A kind of smoking detection method, storage medium and computer
CN109325412A (en) * 2018-08-17 2019-02-12 平安科技(深圳)有限公司 Pedestrian recognition method, device, computer equipment and storage medium
CN109492581A (en) * 2018-11-09 2019-03-19 中国石油大学(华东) A kind of human motion recognition method based on TP-STG frame
CN109784140A (en) * 2018-11-19 2019-05-21 深圳市华尊科技股份有限公司 Driver attributes' recognition methods and Related product
CN109902562A (en) * 2019-01-16 2019-06-18 重庆邮电大学 A kind of driver's exception attitude monitoring method based on intensified learning
CN109918975A (en) * 2017-12-13 2019-06-21 腾讯科技(深圳)有限公司 A kind of processing method of augmented reality, the method for Object identifying and terminal
CN110298257A (en) * 2019-06-04 2019-10-01 东南大学 A kind of driving behavior recognition methods based on human body multiple location feature
CN110298332A (en) * 2019-07-05 2019-10-01 海南大学 Method, system, computer equipment and the storage medium of Activity recognition
CN110309723A (en) * 2019-06-04 2019-10-08 东南大学 A kind of driving behavior recognition methods based on characteristics of human body's disaggregated classification
CN110399767A (en) * 2017-08-10 2019-11-01 北京市商汤科技开发有限公司 Occupant's dangerous play recognition methods and device, electronic equipment, storage medium
CN110425005A (en) * 2019-06-21 2019-11-08 中国矿业大学 The monitoring of transportation of belt below mine personnel's human-computer interaction behavior safety and method for early warning
WO2019232894A1 (en) * 2018-06-05 2019-12-12 中国石油大学(华东) Complex scene-based human body key point detection system and method
CN110688921A (en) * 2019-09-17 2020-01-14 东南大学 Method for detecting smoking behavior of driver based on human body action recognition technology
CN110781771A (en) * 2019-10-08 2020-02-11 北京邮电大学 Abnormal behavior real-time monitoring method based on deep learning
CN110837815A (en) * 2019-11-15 2020-02-25 济宁学院 Driver state monitoring method based on convolutional neural network
CN111144263A (en) * 2019-12-20 2020-05-12 山东大学 Construction worker high-fall accident early warning method and device
WO2020093837A1 (en) * 2018-11-07 2020-05-14 北京达佳互联信息技术有限公司 Method for detecting key points in human skeleton, apparatus, electronic device, and storage medium
CN111178323A (en) * 2020-01-10 2020-05-19 北京百度网讯科技有限公司 Video-based group behavior identification method, device, equipment and storage medium
CN111310542A (en) * 2019-12-02 2020-06-19 湖南中烟工业有限责任公司 Smoking behavior detection method and system, terminal and storage medium
CN111414813A (en) * 2020-03-03 2020-07-14 南京领行科技股份有限公司 Dangerous driving behavior identification method, device, equipment and storage medium
US10713948B1 (en) * 2019-01-31 2020-07-14 StradVision, Inc. Method and device for alerting abnormal driver situation detected by using humans' status recognition via V2V connection

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3547211B1 (en) * 2018-03-30 2021-11-17 Naver Corporation Methods for training a cnn and classifying an action performed by a subject in an inputted video using said cnn
WO2019216593A1 (en) * 2018-05-11 2019-11-14 Samsung Electronics Co., Ltd. Method and apparatus for pose processing
GB201810309D0 (en) * 2018-06-22 2018-08-08 Microsoft Technology Licensing Llc System for predicting articulated object feature location

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Implementation of a driver driving safety monitoring system based on the Android system; Wang Xu; Chen Renwen; Huang Bin; Electronic Measurement Technology (No. 08); pp. 56-60 *

Also Published As

Publication number Publication date
CN111832526A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
WO2021114892A1 (en) Environmental semantic understanding-based body movement recognition method, apparatus, device, and storage medium
CN110711374B (en) Multi-modal dance action evaluation method
CN111104816B (en) Object gesture recognition method and device and camera
KR101434768B1 (en) Moving object tracking system and moving object tracking method
KR102462818B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN106570491A (en) Robot intelligent interaction method and intelligent robot
CN109544523B (en) Method and device for evaluating quality of face image based on multi-attribute face comparison
CN112287868B (en) Human body action recognition method and device
WO2021098147A1 (en) Vr motion sensing data detection method and apparatus, computer device, and storage medium
CN110969078A (en) Abnormal behavior identification method based on human body key points
CN107944395A (en) A kind of method and system based on neutral net verification testimony of a witness unification
CN108537156B (en) Anti-shielding hand key node tracking method
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation
KR102369152B1 (en) Realtime Pose recognition system using artificial intelligence and recognition method
JP2011221699A (en) Operation instruction recognition device and robot
CN111832526B (en) Behavior detection method and device
KR20230080938A (en) Method and apparatus of gesture recognition and classification using convolutional block attention module
CN116363600B (en) Method and system for predicting maintenance operation risk of motor train unit
CN110705605B (en) Method, device, system and storage medium for establishing feature database and identifying actions
CN111753796A (en) Method and device for identifying key points in image, electronic equipment and storage medium
CN111860107A (en) Standing long jump evaluation method based on deep learning attitude estimation
CN114639168A (en) Method and system for running posture recognition
JP6810442B2 (en) A camera assembly, a finger shape detection system using the camera assembly, a finger shape detection method using the camera assembly, a program for implementing the detection method, and a storage medium for the program.
CN112949587B (en) Hand holding gesture correction method, system and computer readable medium based on key points
CN113408435A (en) Safety monitoring method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant