CN111507182A - Skeleton point fusion cyclic cavity convolution-based littering behavior detection method - Google Patents
- Publication number: CN111507182A (application CN202010167698.1A)
- Authority: CN (China)
- Prior art keywords: image, littering, heat map, convolution, garbage
- Legal status: Granted
Classifications
- G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
Abstract
The invention discloses a littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution. The method collects an image set containing littering behavior for pre-training; obtains, from the pre-trained image set, a training set of individuals exhibiting littering behavior and manually annotates the distribution of their human skeleton points; builds a skeleton-point heat map for each training image from the annotated skeleton-point distribution; constructs a littering-behavior detection network based on skeleton-point fusion and recurrent dilated convolution; inputs the pre-trained image set into the detection network and iteratively updates it by gradient descent to obtain the optimal network; and finally inputs consecutive frames from the test set into the optimal network, obtains the corresponding skeleton-point distribution sequence, computes its similarity to reference sequences, and judges whether littering occurred. The invention can accurately detect littering behavior in complex scenes.
Description
Technical Field
The invention relates to computer vision and image recognition, and in particular to a littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution. It belongs to the technical field of computer-vision image processing.
Background
China is currently urbanizing rapidly, and environmental problems demand urgent attention. Detecting littering behavior with modern image-processing techniques is therefore important for building clean, attractive towns. One existing approach is parabolic-trajectory recognition based on image semantics: it tracks the trajectory of a thrown object against the background boundary to judge whether garbage was littered. With the maturing of deep learning in image recognition, littering-detection methods based on human pose estimation have also been proposed; they depend less on the environment and apply more broadly. Pose estimation still has weaknesses, however. Early approaches combined local detectors with structural constraints; convolutional neural networks later improved detection performance substantially, but the convolution process loses image information, so under heavy occlusion or when the background and the human body are hard to distinguish, skeleton points may be estimated incorrectly.
Disclosure of Invention
The object of the invention is to address the shortcomings of the prior art by providing a littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution. The method adapts to littering detection in complex environments and addresses two problems of existing pose-estimation methods: the difficulty of accurately estimating human skeleton points in video under occlusion and in cluttered scenes, and their heavy computational cost.
A littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution comprises the following steps:
step (S1): collect a training set of littering-behavior images for pre-training;
step (S2): obtain the individuals exhibiting littering behavior via an object-detection algorithm, and manually annotate the distribution of their human skeleton points in the training set;
step (S3): create a skeleton-point heat map for each training image based on step S2;
step (S4): construct the network architecture of the littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution;
step (S5): input the littering image set to train the network, updating it iteratively by gradient descent;
step (S6): input several consecutive detection frames, obtain the corresponding skeleton-point distribution sequence, compute its similarity, and judge whether the behavior is littering.
The invention has the following advantages:
1. detecting human pose with the strong self-learning capability of a neural network reduces the dependence of behavior detection on the background environment and broadens the range of application;
2. introducing dilated convolution enlarges the receptive field over the input image without adding extra parameters, greatly reducing computational complexity;
3. through the contextual semantic information carried by the recurrent module, joint points of occluded body parts can be inferred from the distribution of the other visible keypoints.
Drawings
FIG. 1 is a flow diagram of an overall embodiment of the present invention;
FIG. 2 is a schematic of the training model of the present invention;
FIG. 3 is a schematic diagram of the recurrent dilated convolution of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, the littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution comprises the following steps:
step (S1): collect an image set containing littering behavior and use it for pre-training;
step (S2): from the pre-trained image set, obtain an image training set of individuals exhibiting littering behavior via an object-detection algorithm, and manually annotate the distribution of human skeleton points of the littering behavior for the images in that training set;
step (S3): build a skeleton-point heat map for each image in the training set based on the annotated human skeleton-point distribution;
step (S4): construct a littering-behavior detection network based on skeleton-point fusion and recurrent dilated convolution;
step (S5): input the pre-trained image set into the littering-behavior detection network and iteratively update the network by gradient descent to obtain the optimal detection network;
step (S6): input several consecutive detection frames from the test set into the optimal detection network, obtain the corresponding skeleton-point distribution sequence, compute its similarity, and judge whether the behavior is littering.
Further, step (S1) specifically comprises the following steps:
S1.1: select different sites, place surveillance cameras at different angles, and collect videos of different individuals throwing garbage;
S1.2: sort the collected videos, extract the consecutive frames of each video, convert them into a corresponding image sequence, and store them in the raw human-pose data set, manually classifying image sequence I at the same time; manual classification means treating the littering of the same person at the same place and in the same time period as one class;
S1.3: apply data-enhancement preprocessing to the stored raw human-pose data set, filtering each image with a Gaussian filter to reduce image noise, yielding the preprocessed human-pose data set I.
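The Gaussian-filter denoising in S1.3 can be sketched in a few lines. The kernel size and sigma below are illustrative assumptions; the patent does not specify them:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Denoise a 2-D grayscale image by convolving it with a Gaussian kernel."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")   # edge padding keeps the output size
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

In practice a library routine such as `scipy.ndimage.gaussian_filter` or `cv2.GaussianBlur` would be used; the explicit loop above just makes the weighted averaging visible.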
Further, step (S2) specifically comprises the following steps:
S2.1: run an object-detection algorithm on each image of human-pose data set I produced in step S1.3 to identify the individual person and remove irrelevant image content, yielding human-pose data set II;
S2.2: manually annotate the distribution of each human skeleton point of the littering individual in human-pose data set II, yielding the human skeleton-point distribution image set.
Further, step (S3) specifically comprises the following steps:
S3.1: for each image in the human skeleton-point distribution image set, generate a Gaussian heat map centered on the coordinate of each skeleton point, and generate a blank map for each skeleton point missing from the image. For example, if one of 7 skeleton points is missing, 6 Gaussian heat maps and one blank map are generated. The Gaussian heat maps obtained after connecting the skeleton points according to the human body structure are called body-part heat maps;
S3.2: according to the human skeleton-point distribution image set, connect the skeleton points in each image following the human body structure to obtain a skeleton-point connection image, and mark the connection positions between skeleton points as joint points; generate a Gaussian heat map centered on each joint point and connect these maps according to the human body structure to obtain the joint-point heat maps. Fusing the joint-point heat maps with the corresponding body-part heat maps yields the skeleton-point heat map, giving the human pose of the littering behavior and establishing the corresponding littering human-pose database.
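The per-keypoint Gaussian heat maps of S3.1 — one map per annotated skeleton point, a blank map per missing point — can be generated as follows. The map size and sigma are illustrative assumptions, not values from the patent:

```python
import numpy as np

def keypoint_heatmap(h, w, cx, cy, sigma=2.0):
    """Gaussian heat map centered on one skeleton point;
    a blank (all-zero) map when the point is missing."""
    if cx is None or cy is None:
        return np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2))

def pose_heatmaps(keypoints, h, w, sigma=2.0):
    """Stack one heat map per skeleton point, shape (K, H, W)."""
    return np.stack([keypoint_heatmap(h, w, x, y, sigma) for x, y in keypoints])
```

The fusion in S3.2 then amounts to combining these per-point maps with the joint-point maps, e.g. by element-wise maximum or sum.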
Further, step (S4) specifically comprises the following steps:
S4.1: construct the littering-behavior detection network based on skeleton-point fusion and recurrent dilated convolution. The network consists of 4 layers: the first two are forward-propagation modules, the third is a recurrent module that introduces dilated convolution, and the 4th outputs the skeleton-point heat maps and the loss values;
S4.2: the forward-propagation modules (layers 1 and 2) use 3 × 3 convolution kernels followed by pooling, which reduces the resolution of the input picture and extracts features further; a nonlinear activation function follows each convolution to preserve representational power;
S4.3: the output of the second-layer forward-propagation module is concatenated with the output of the third layer and used as the input of the third-layer recurrent module. Inside the recurrent module, 3 dilated convolution networks are iterated; in each iteration, the output of the second-layer forward-propagation module is held fixed as part of the recurrent module's input, while the third layer's output is updated from the output of the previous iteration;
the dilated convolution network introduced by the recurrent module has three filters with kernel size 3 and dilation rates 1, 2 and 5, respectively; by the nature of dilated convolution, this enlarges the receptive field and gathers semantic information at different receptive fields without adding extra parameters, greatly reducing computational complexity;
S4.4: the fourth layer has four output parts, producing target heat maps and loss values at different depths. Each part's target heat map comprises a skeleton-point heat map and a body-part heat map; the body-part heat map mainly captures the limbs and serves as data enhancement for modeling the joint-point heat map. The skeleton-point heat maps and loss values of the first three parts serve as auxiliary supervision that increases the gradient magnitude during backpropagation; the skeleton-point heat map output by the fourth part is the final prediction of the network, and its loss value is used to optimize the parameters during training.
Further, step (S5) specifically comprises the following steps:
S5.1: load the pre-trained network parameters into the littering-behavior detection network to initialize it, and set the learning rate to 10⁻⁵ (decayed step by step to 10⁻⁶);
S5.2: input human-pose data set II into the initialized littering-behavior detection network and train with the Adam optimizer, which dynamically adjusts each parameter's learning rate using first- and second-moment estimates of the gradients; after bias correction, each iteration's learning rate stays within a fixed range, keeping the parameters stable.
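For reference, a single Adam update of the kind described in S5.2 — first- and second-moment estimates with bias correction — looks like this. The hyper-parameters follow the common Adam defaults except for the 10⁻⁵ learning rate of S5.1; the patent gives no other values:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter vector theta at iteration t (t >= 1)."""
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1**t)             # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step size
    return theta, m, v
```

The per-parameter division by √v̂ is what keeps the effective learning rate of each parameter within a bounded range, as the step describes.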
During training, the skeleton-point heat maps output by the fourth layer are weighted equally, and the mean squared error is used to compute the pixel-wise deviation between the predicted heat maps and the manually annotated heat maps. The training function is as follows (reconstructed here in standard mean-squared-error form, as the equation image is not reproduced in the text):

f^* = \arg\min_\theta \sum_{y} \sum_{k=1}^{K} \lVert f_k(x;\theta) - h_k \rVert_2^2

where h_k is the manually annotated ground-truth skeleton-point heat map; f(x) denotes the iteratively trained model on feature map x, which yields the predicted skeleton-point heat maps after training; θ denotes the learnable parameters; x denotes the feature map; y denotes the iteration index; and K is the total number of skeleton-point heat maps. S5.3: complete the training of the littering-behavior detection network using backpropagation and gradient descent.
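An equally weighted mean-squared-error heat-map loss of the kind described above can be sketched as follows (shapes are illustrative; the patent's exact weighting scheme is not reproduced in the text):

```python
import numpy as np

def heatmap_mse_loss(pred, target):
    """Mean-squared error between predicted and annotated heat-map stacks,
    with each of the K maps weighted equally. Shapes: (K, H, W)."""
    assert pred.shape == target.shape
    k = pred.shape[0]
    return sum(np.mean((pred[i] - target[i])**2) for i in range(k)) / k
```

In a training loop, this scalar would be computed for each of the four output parts, with the first three acting as auxiliary losses and the fourth driving the final parameter update.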
Further, step (S6) specifically comprises the following steps:
S6.1: extract consecutive detection frames from the video under test, input them into the trained littering-behavior detection network, and connect the output skeleton-point heat maps to obtain a human-pose map sequence;
S6.2: compute the similarity between the obtained human-pose sequence and the sequences in the littering human-pose database; if the similarity measure is below a threshold, judge the behavior to be littering, otherwise judge it not to be littering. The similarity formula is as follows (reconstructed in standard per-keypoint distance form, as the equation image is not reproduced in the text):

S = \frac{1}{K} \sum_{k=1}^{K} \lVert s_k^a - s_k^b \rVert

where s_k denotes the k-th skeleton point, a denotes the detection result, b denotes the corresponding entry in the manually defined littering-behavior database, and K is the number of skeleton points; the smaller S is, the higher the similarity.
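A minimal sketch of the S6.2 decision, assuming S is the mean per-keypoint Euclidean distance (the exact formula image is not reproduced in the text) and an illustrative, hypothetical threshold:

```python
import numpy as np

def pose_similarity(detected, reference):
    """Mean Euclidean distance between corresponding skeleton points;
    smaller S means higher similarity."""
    a = np.asarray(detected, dtype=float)
    b = np.asarray(reference, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=-1)))

def is_littering(sequence, database_sequence, threshold=10.0):
    """Average the per-frame distance over the sequence; below the
    threshold, the pose sequence is judged to be littering."""
    s = np.mean([pose_similarity(d, r)
                 for d, r in zip(sequence, database_sequence)])
    return bool(s < threshold)
```

The threshold would in practice be chosen on a validation split of the littering human-pose database.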
The schematic of the constructed training model is shown in FIG. 2, where Conv denotes a convolution, Rt_x denotes the x-th output skeleton-point heat map, and lossx denotes the x-th loss value.
The model of the invention consists of four layers: the first two are forward-propagation modules, the third is a recurrent structure with dilated convolution, and the 4th outputs the skeleton-point heat maps and loss values.
The initial convolution layer (first layer) of the forward-propagation module uses a 3 × 3 filter followed by pooling with stride 2, reducing the resolution of the input picture and extracting features further. The second layer uses a larger convolution filter (5 × 5) to learn more of the body structure, and the nonlinear activation function ReLU follows each convolution to preserve representational power.
The output of the second-layer network is the initial input of the third-layer recurrent module. Inside the recurrent module, 3 dilated convolutions are iterated; in each iteration, the network input combines the second layer's output with the result of the previous iteration. Each iteration also outputs a target heat map and loss value: the target heat map comprises a joint-point heat map and a body-part heat map, where the body-part heat map mainly captures the limbs, serves as data enhancement for modeling the joint-point heat map, and acts as an auxiliary loss that increases the gradient magnitude during backpropagation, improving the optimization of the network parameters.
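The iteration scheme of the recurrent module — fixed stage-2 features concatenated with the previous iteration's output — can be sketched generically. Here `step` stands in for the three dilated convolutions, and shapes and channel counts are illustrative assumptions:

```python
import numpy as np

def recurrent_refine(features, step, n_iters=3):
    """Iterative refinement: each pass receives the fixed stage-2 features
    concatenated (along the channel axis) with the previous output."""
    # First pass: no previous output yet, so pair the features with zeros.
    out = step(np.concatenate([features, np.zeros_like(features)], axis=0))
    for _ in range(n_iters - 1):
        out = step(np.concatenate([features, out], axis=0))
    return out
```

Any function mapping a 2C-channel input back to C channels can serve as `step`; in the patent's network it is the stack of dilated convolutions plus the heat-map head.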
The schematic of the constructed recurrent dilated-convolution structure is shown in FIG. 3, where K denotes the convolution kernel size and Rate denotes the dilation rate.
Dilated convolution was first applied in image segmentation; by adding a single hyper-parameter, the dilation rate, it obtains feature information at different receptive fields of the input image without adding extra parameters, greatly simplifying feature extraction, and it has also been adopted in crowd counting. The invention uses three filters with 3 × 3 kernels and dilation rates 1, 2 and 5 to capture features carrying different information, and a stride of 1 to preserve high resolution.
Claims (7)
1. A littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution, characterized by comprising the following steps:
step (S1): collecting an image set containing littering behavior and using it for pre-training;
step (S2): obtaining, from the pre-trained image set, an image training set of individuals exhibiting littering behavior via an object-detection algorithm, and manually annotating the distribution of human skeleton points of the littering behavior for the images in the obtained training set;
step (S3): building a skeleton-point heat map for each image in the training set based on the annotated human skeleton-point distribution;
step (S4): constructing a littering-behavior detection network based on skeleton-point fusion and recurrent dilated convolution;
step (S5): inputting the pre-trained image set into the littering-behavior detection network and iteratively updating the network by gradient descent to obtain the optimal detection network;
step (S6): inputting several consecutive detection frames from the test set into the optimal detection network, obtaining the corresponding skeleton-point distribution sequence, computing its similarity, and judging whether the behavior is littering.
2. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 1, characterized in that step (S1) specifically comprises:
S1.1: selecting different sites, placing surveillance cameras at different angles, and collecting videos of different individuals throwing garbage;
S1.2: sorting the collected videos, extracting the consecutive frames of each video, converting them into a corresponding image sequence, and storing them in the raw human-pose data set, while manually classifying image sequence I; manual classification means treating the littering of the same person at the same place and in the same time period as one class;
S1.3: applying data-enhancement preprocessing to the stored raw human-pose data set, filtering each image with a Gaussian filter to reduce image noise, yielding the preprocessed human-pose data set I.
3. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 2, characterized in that step (S2) specifically comprises:
S2.1: running an object-detection algorithm on each image of human-pose data set I produced in step S1.3 to identify the individual person and remove irrelevant image content, yielding human-pose data set II;
S2.2: manually annotating the distribution of each human skeleton point of the littering individual in human-pose data set II, yielding the human skeleton-point distribution image set.
4. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 3, characterized in that step (S3) specifically comprises:
S3.1: for each image in the human skeleton-point distribution image set, generating a Gaussian heat map centered on the coordinate of each skeleton point, and generating a blank map for each skeleton point missing from the image; the Gaussian heat maps obtained after connecting the skeleton points according to the human body structure are called body-part heat maps;
S3.2: according to the human skeleton-point distribution image set, connecting the skeleton points in each image following the human body structure to obtain a skeleton-point connection image, and marking the connection positions between skeleton points as joint points; generating a Gaussian heat map centered on each joint point and connecting these maps according to the human body structure to obtain the joint-point heat maps; and fusing the joint-point heat maps with the corresponding body-part heat maps to obtain the skeleton-point heat map, thereby obtaining the human pose of the littering behavior and establishing the corresponding littering human-pose database.
5. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 4, characterized in that step (S4) specifically comprises:
S4.1: constructing the littering-behavior detection network based on skeleton-point fusion and recurrent dilated convolution, the network consisting of 4 layers: the first two are forward-propagation modules, the third is a recurrent module that introduces dilated convolution, and the 4th outputs the skeleton-point heat maps and the loss values;
S4.2: the forward-propagation modules (layers 1 and 2) use 3 × 3 convolution kernels followed by pooling, reducing the resolution of the input picture and extracting features further; the nonlinear activation function ReLU follows each convolution to preserve representational power;
S4.3: the output of the second-layer forward-propagation module is concatenated with the output of the third layer and used as the input of the third-layer recurrent module; inside the recurrent module, 3 dilated convolution networks are iterated, and in each iteration the output of the second-layer forward-propagation module is held fixed as part of the recurrent module's input while the third layer's output is updated from the output of the previous iteration;
the dilated convolution network introduced by the recurrent module has three filters with kernel size 3 and dilation rates 1, 2 and 5, respectively; by the nature of dilated convolution, this enlarges the receptive field and gathers semantic information at different receptive fields without adding extra parameters, greatly reducing computational complexity;
S4.4: the fourth layer has four output parts, producing target heat maps and loss values at different depths; each part's target heat map comprises a skeleton-point heat map and a body-part heat map, the body-part heat map mainly capturing the limbs and serving as data enhancement for modeling the joint-point heat map; the skeleton-point heat maps and loss values of the first three parts serve as auxiliary supervision that increases the gradient magnitude during backpropagation; the skeleton-point heat map output by the fourth part is the final prediction of the network, and its loss value is used to optimize the parameters during training.
6. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 5, characterized in that step (S5) specifically comprises:
S5.1: loading the pre-trained network parameters into the littering-behavior detection network to initialize it, and setting the learning rate to 10⁻⁵;
S5.2: inputting human-pose data set II into the initialized littering-behavior detection network and training with the Adam optimizer, which dynamically adjusts each parameter's learning rate using first- and second-moment estimates of the gradients; after bias correction, each iteration's learning rate stays within a fixed range, keeping the parameters stable;
during training, the skeleton-point heat maps output by the fourth layer are weighted equally, and the mean squared error is used to compute the pixel-wise deviation between the predicted heat maps and the manually annotated heat maps; the training function is as follows (reconstructed in standard mean-squared-error form, as the equation image is not reproduced in the text):

f^* = \arg\min_\theta \sum_{y} \sum_{k=1}^{K} \lVert f_k(x;\theta) - h_k \rVert_2^2

where h_k is the manually annotated ground-truth skeleton-point heat map, f(x) denotes the iteratively trained model on feature map x, which yields the predicted skeleton-point heat maps after training, θ denotes the learnable parameters, x denotes the feature map, y denotes the iteration index, and K is the total number of skeleton-point heat maps;
S5.3: completing the training of the littering-behavior detection network using backpropagation and gradient descent.
7. The littering-behavior detection method based on skeleton-point fusion and recurrent dilated convolution according to claim 6, characterized in that step (S6) specifically comprises:
S6.1: extracting consecutive detection frames from the video under test, inputting them into the trained littering-behavior detection network, and connecting the output skeleton-point heat maps to obtain a human-pose map sequence;
S6.2: computing the similarity between the obtained human-pose sequence and the sequences in the littering human-pose database; if the similarity measure is below a threshold, judging the behavior to be littering, and otherwise judging it not to be littering; the similarity formula is as follows (reconstructed in standard per-keypoint distance form, as the equation image is not reproduced in the text):

S = \frac{1}{K} \sum_{k=1}^{K} \lVert s_k^a - s_k^b \rVert

where s_k denotes the k-th skeleton point, a denotes the detection result, b denotes the corresponding entry in the manually defined littering-behavior database, and K is the number of skeleton points; the smaller S is, the higher the similarity.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010167698.1A | 2020-03-11 | 2020-03-11 | CN111507182B — Skeleton point fusion cyclic cavity convolution-based littering behavior detection method |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010167698.1A | 2020-03-11 | 2020-03-11 | CN111507182B — Skeleton point fusion cyclic cavity convolution-based littering behavior detection method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111507182A (application) | 2020-08-07 |
| CN111507182B (granted) | 2021-03-16 |
Family
ID=71863907

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010167698.1A — CN111507182B (Active) | | 2020-03-11 | 2020-03-11 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111507182B |
Cited By (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112115846A | 2020-09-15 | 2020-12-22 | 上海迥灵信息技术有限公司 | Identification method and device for littering behavior, and readable storage medium |
| CN112418418A | 2020-11-11 | 2021-02-26 | 江苏禹空间科技有限公司 | Neural-network-based data processing method and device, storage medium, and server |
| CN112541891A | 2020-12-08 | 2021-03-23 | 山东师范大学 | Crowd counting method and system based on a dilated-convolution high-resolution network |
| CN112861723A | 2021-02-07 | 2021-05-28 | 北京卓视智通科技有限责任公司 | Physical-exercise recognition and counting method and device based on human pose recognition, and computer-readable storage medium |
| CN113392765A | 2021-06-15 | 2021-09-14 | 广东工业大学 | Fall detection method and system based on machine vision |
| CN113435419A | 2021-08-26 | 2021-09-24 | 城云科技(中国)有限公司 | Illegal garbage-discarding behavior detection method, device, and application |
| WO2022136915A1 | 2020-12-21 | 2022-06-30 | Future Health Works Ltd. | Joint angle determination under limited visibility |
Citations (10)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105930767A | 2016-04-06 | 2016-09-07 | 南京华捷艾米软件科技有限公司 | Human-skeleton-based action recognition method |
| US10162844B1 | 2017-06-22 | 2018-12-25 | NewVoiceMedia Ltd. | System and methods for using conversational similarity for dimension reduction in deep analytics |
| CN108399361A | 2018-01-23 | 2018-08-14 | 南京邮电大学 | Pedestrian detection method based on a convolutional neural network (CNN) and semantic segmentation |
| CN108549841A | 2018-03-21 | 2018-09-18 | 南京邮电大学 | Deep-learning-based method for recognizing fall behavior in the elderly |
| CN108647639A | 2018-05-10 | 2018-10-12 | 电子科技大学 | Real-time human skeletal joint-point detection method |
| WO2019222951A1 | 2018-05-24 | 2019-11-28 | Nokia Technologies Oy | Method and apparatus for computer vision |
| CN110334589A | 2019-05-23 | 2019-10-15 | 中国地质大学(武汉) | Action recognition method using a high-temporal-resolution 3D neural network based on dilated convolution |
| CN110348445A | 2019-06-06 | 2019-10-18 | 华中科技大学 | Instance segmentation method fusing dilated convolution and edge information |
| CN110443144A | 2019-07-09 | 2019-11-12 | 天津中科智能识别产业技术研究院有限公司 | Human-image keypoint pose estimation method |
| CN110751107A | 2019-10-23 | 2020-02-04 | 北京精英系统科技有限公司 | Method for detecting the event of a person discarding articles |
2020
- 2020-03-11: Application CN202010167698.1A filed in China (CN); granted as CN111507182B, status Active
Non-Patent Citations (3)
Title |
---|
W. Yan et al.: "Speeding Up Dilated Convolution Based Pedestrian Detection with Tensor Decomposition", Lecture Notes in Computer Science * |
Yali Nie et al.: "A Multi-Stage Convolution Machine with Scaling and Dilation for Human Pose Estimation", KSII Transactions on Internet and Information Systems * |
Shen Wenxiang et al.: "Indoor crowd detection network based on multi-level features and hybrid attention mechanism", Journal of Computer Applications (《计算机应用》) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112115846A (en) * | 2020-09-15 | 2020-12-22 | 上海迥灵信息技术有限公司 | Identification method and identification device for littering garbage behavior and readable storage medium |
CN112115846B (en) * | 2020-09-15 | 2024-03-01 | 上海迥灵信息技术有限公司 | Method and device for identifying random garbage behavior and readable storage medium |
CN112418418A (en) * | 2020-11-11 | 2021-02-26 | 江苏禹空间科技有限公司 | Data processing method and device based on neural network, storage medium and server |
CN112541891A (en) * | 2020-12-08 | 2021-03-23 | 山东师范大学 | Crowd counting method and system based on a dilated-convolution high-resolution network |
WO2022136915A1 (en) * | 2020-12-21 | 2022-06-30 | Future Health Works Ltd. | Joint angle determination under limited visibility |
GB2618452A (en) * | 2020-12-21 | 2023-11-08 | Future Health Works Ltd | Joint angle determination under limited visibility |
CN112861723A (en) * | 2021-02-07 | 2021-05-28 | 北京卓视智通科技有限责任公司 | Physical exercise recognition counting method and device based on human body posture recognition and computer readable storage medium |
CN112861723B (en) * | 2021-02-07 | 2023-09-01 | 北京卓视智通科技有限责任公司 | Sports action recognition counting method and device based on human body gesture recognition and computer readable storage medium |
CN113392765A (en) * | 2021-06-15 | 2021-09-14 | 广东工业大学 | Tumble detection method and system based on machine vision |
CN113392765B (en) * | 2021-06-15 | 2023-08-01 | 广东工业大学 | Tumble detection method and system based on machine vision |
CN113435419A (en) * | 2021-08-26 | 2021-09-24 | 城云科技(中国)有限公司 | Illegal garbage discarding behavior detection method, device and application |
Also Published As
Publication number | Publication date |
---|---|
CN111507182B (en) | 2021-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111507182B (en) | Littering behavior detection method based on skeleton point fusion and recurrent dilated convolution | |
CN113065558B (en) | Lightweight small target detection method combined with attention mechanism | |
CN105678284B (en) | Fixed-position human behavior analysis method | |
CN110852267B (en) | Crowd density estimation method and device based on optical flow fusion type deep neural network | |
CN112052886A (en) | Human body action attitude intelligent estimation method and device based on convolutional neural network | |
CN104615983A (en) | Behavior identification method based on recurrent neural network and human skeleton movement sequences | |
CN106204646A (en) | Multiple moving object tracking based on BP neural network | |
CN110458046B (en) | Human motion trajectory analysis method based on joint point extraction | |
CN105488456A (en) | Adaptive rejection threshold adjustment subspace learning based human face detection method | |
CN111709321A (en) | Human behavior recognition method based on graph convolution neural network | |
CN111832484A (en) | Loop detection method based on convolution perception hash algorithm | |
CN107146237A (en) | Target tracking method based on presence learning and estimation | |
CN112597814A (en) | Improved OpenPose-based detection method for multi-person abnormal behavior and mask wearing in classrooms | |
CN107945210A (en) | Target tracking algorism based on deep learning and environment self-adaption | |
CN113989261A (en) | Photovoltaic panel boundary segmentation method for UAV-view infrared images based on an improved U-Net | |
CN116342953A (en) | Dual-mode target detection model and method based on residual shrinkage attention network | |
CN112767277B (en) | Depth feature sequencing deblurring method based on reference image | |
CN111401209B (en) | Action recognition method based on deep learning | |
CN113205060A (en) | Human body action detection method adopting circulatory neural network to judge according to bone morphology | |
CN116246338B (en) | Behavior recognition method based on graph convolution and transducer composite neural network | |
CN114897728A (en) | Image enhancement method and device, terminal equipment and storage medium | |
CN114694261A (en) | Video three-dimensional human body posture estimation method and system based on multi-level supervision graph convolution | |
CN113159007A (en) | Gait emotion recognition method based on adaptive graph convolution | |
CN112446253A (en) | Skeleton behavior identification method and device | |
Sun et al. | Kinect depth recovery via the cooperative profit random forest algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 2020-08-07; Assignee: Hangzhou Greentown Information Technology Co., Ltd.; Assignor: Hangzhou Dianzi University; Contract record no.: X2023330000109; Denomination of invention: Detection of littering behavior based on skeleton point fusion and recurrent dilated convolution; Granted publication date: 2021-03-16; License type: Common License; Record date: 2023-03-11 |
|