CN111950500A - Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment - Google Patents

Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment

Info

Publication number
CN111950500A
CN111950500A CN202010852184.XA
Authority
CN
China
Prior art keywords
tiny
improved yolov3
network model
sample
yolov3
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010852184.XA
Other languages
Chinese (zh)
Inventor
周军
于傲泽
龙羽
徐菱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Ruixinxing Technology Co ltd
Original Assignee
Chengdu Ruixinxing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Ruixinxing Technology Co ltd filed Critical Chengdu Ruixinxing Technology Co ltd
Priority to CN202010852184.XA priority Critical patent/CN111950500A/en
Publication of CN111950500A publication Critical patent/CN111950500A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time pedestrian detection method based on an improved YOLOv3-tiny network for factory environments, comprising the following steps: acquiring pedestrian images in a factory environment and additionally annotating the lower body and partial body regions of each pedestrian in the extracted images to obtain a training set and a test set; constructing an improved YOLOv3-tiny network model; iteratively training the model on the training set so that it learns the features of the training set, yielding a trained improved YOLOv3-tiny network model; and feeding the test-set images into the trained model to obtain pedestrian preselection boxes for those images. The scheme offers simple logic, low investment cost and high computational efficiency, and has high practical and popularization value in the technical field of pedestrian detection.

Description

Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment
Technical Field
The invention relates to the technical field of pedestrian detection, and in particular to a real-time pedestrian detection method based on an improved YOLOv3-tiny network for factory environments.
Background
Pedestrian detection uses image-processing techniques to find pedestrian targets in video transmitted from a live camera. A detector typically localizes each pedestrian with a rectangular box and feeds the annotated image back to the user interface as a real-time video stream. YOLOv3 is the third version of the YOLO series of object detection algorithms; it achieves a satisfactory real-time detection speed while maintaining high detection accuracy. Owing to this strong performance, YOLOv3 has become one of the preferred algorithms in engineering and can meet the real-time requirements of industrial applications. Its network design continues the core idea of GoogLeNet and realizes end-to-end object detection. For single-class recognition, where computation and model size must be reduced, the YOLOv3-tiny network is often used instead: it trims the number of network layers of YOLOv3, further increasing real-time detection speed while still meeting accuracy requirements and consuming fewer resources, which makes it suitable for embedded devices.
However, the camera of an industrial robot has a relatively low viewing angle, the factory environment is complex and variable, and real-time requirements are strict. Most existing pedestrian detection techniques target outdoor scenes or scenes viewed from a relatively high angle, so they cannot effectively recognize the lower body or partial body of a person, cannot cope with changes in environment and viewing angle, and rely on relatively complex network structures that are difficult to deploy on embedded devices; the resulting performance problems also compromise recognition rate and real-time operation.
A pedestrian detection method using the YOLOv3 network also exists in the prior art, for example the Chinese patent with application number 202010123538.7, entitled "A multi-target pedestrian detection and tracking method based on YOLOv3", which comprises the following steps. Step 1: an improved YOLOv3 sub-network for object detection, detection being the basic operation of detection-based tracking. Step 2: establishing a tracker, which must exist before the target of the current frame can be associated with a tracked target. Step 3: data association, in which the target of the current frame is associated with the tracked target, generally by fusing the target's motion information and feature information. Although this method can detect pedestrians, it has the following problems:
firstly, it adopts the full YOLOv3 network, which requires more expensive hardware; compared with the YOLOv3-tiny network it has more layers and a slower computation speed;
secondly, it generates preselection boxes with the standard K-means clustering algorithm, which randomly selects K target points as the initial cluster centers; this randomness causes similar categories to be mixed together and classified inaccurately, degrading the clustering result;
thirdly, because the viewing angle of an industrial robot is low, a recognition model applied to a factory scene must recognize the lower half of the human body very well, and the compared technique shows no advantage or special enhancement for lower-body recognition;
therefore, a real-time pedestrian detection method based on an improved YOLOv3-tiny network for factory environments, one that is simple in structure, accurate in recognition and light in computation, is urgently needed.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a real-time pedestrian detection method based on an improved YOLOv3-tiny network for factory environments. The technical scheme adopted by the invention is as follows:
The real-time pedestrian detection method based on the improved YOLOv3-tiny in a factory environment comprises the following steps:
acquiring pedestrian images in a factory environment and additionally annotating the lower body and partial body regions of each pedestrian in the extracted images to obtain a training set and a test set;
constructing an improved YOLOv3-tiny network model;
iteratively training the improved YOLOv3-tiny network model on the training set so that it learns the features of the training set, yielding the trained improved YOLOv3-tiny network model;
and feeding the test-set images into the trained improved YOLOv3-tiny network model to obtain the pedestrian preselection boxes for those images.
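As an illustration of the annotation step, the extra body-region boxes can be stored in the plain-text label format that Darknet-family YOLO models train on (one `class cx cy w h` line per box, normalized to [0, 1]). The class layout below (0 = full body, 1 = lower body, 2 = partial body) is an assumption for illustration; the patent only states that the lower body and body parts receive additional labels.

```python
def to_yolo_label(box, img_w, img_h, class_id):
    """Convert a pixel-space box (x1, y1, x2, y2) into a YOLO label line.

    class_id layout (0 = full body, 1 = lower body, 2 = partial body)
    is a hypothetical choice for this sketch, not taken from the patent.
    """
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2.0 / img_w   # box center, normalized by image width
    cy = (y1 + y2) / 2.0 / img_h   # box center, normalized by image height
    w = (x2 - x1) / img_w          # normalized box width
    h = (y2 - y1) / img_h          # normalized box height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a lower-body box occupying the top-left quarter of a 200x400 image.
print(to_yolo_label((0, 0, 100, 200), 200, 400, 1))
```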
Further, screening the preselection boxes with the improved YOLOv3-tiny network model comprises the following steps:
regressing categories and bounding boxes on the feature layers of the improved YOLOv3-tiny network model;
convolving the image to obtain preselection-box coordinates;
and applying non-maximum suppression to the preselection-box coordinates to obtain the screened preselection boxes.
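The screening steps above end in non-maximum suppression, which can be sketched as the standard greedy NMS over `[x1, y1, x2, y2]` boxes used by YOLO-style detectors. The IoU threshold of 0.45 is a common default, not a value stated in the patent.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # process highest-scoring box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the winning box with every remaining candidate.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop candidates that overlap the winner at or above the threshold.
        order = order[1:][iou < iou_thresh]
    return keep
```

Two heavily overlapping pedestrian boxes collapse to the higher-scoring one, while a distant box survives.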
Preferably, the number of iterations of the iterative training is 30000.
Further, the target detection of the improved YOLOv3-tiny network model comprises the following steps:
preprocessing the training set and the test set with a leader clustering algorithm to generate several sample subsets, and sampling from those subsets;
computing the mean distances between the sample subsets, integrating them with the K-means clustering result, and determining the initial positions of the preselection-box centers;
and sending the images of the training set and the test set into the improved YOLOv3-tiny network model for target detection.
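The leader-clustering preprocessing in the first step can be sketched as a single pass that assigns each sample to the first existing leader within the distance bound and otherwise founds a new subset. The plain Euclidean distance used here is an illustrative assumption; the text only specifies a distance threshold M.

```python
import numpy as np

def leader_cluster(samples, threshold):
    """One-pass leader clustering.

    `threshold` plays the role of the distance bound M in the text;
    Euclidean distance is an assumption made for this sketch.
    """
    leaders, subsets = [], []
    for x in samples:
        for j, lead in enumerate(leaders):
            if np.linalg.norm(x - lead) <= threshold:
                subsets[j].append(x)  # join the first close-enough leader
                break
        else:
            leaders.append(x)         # no leader is close: found a new subset
            subsets.append([x])
    return leaders, subsets
```

Because it touches each sample once, this preprocessing is cheap and gives kernel K-means a small set of subsets to sample from instead of random initial centers.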
Further, the clustering operation of the improved YOLOv3-tiny network model comprises the following steps:
setting the data set of the training set and/or the test set as X = (x_1, x_2, x_3, ..., x_n) and the threshold of the distance to the cluster center as M, where M takes the value of the standard deviation of the samples in the K-nearest-neighbour sample set; each sample x_i belongs to the data set X;
mapping each sample x_i (i = 1, 2, 3, ..., l) into a high-dimensional kernel space G through a nonlinear mapping θ, where l denotes the total number of samples;
performing the K-means clustering operation in the high-dimensional kernel space G with the optimized objective function
J = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} ‖θ(x_i) − m_k‖²,
wherein m_k denotes the sample mean of class k,
m_k = (1/l_k) Σ_{x_j ∈ C_k} θ(x_j),
θ(x_i) denotes the image of sample x_i in the clustering space, and k denotes a class label of the K-means clustering;
calculating the distance from a feature point to a cluster center in the kernel space via the kernel trick, with the expression
‖θ(x_i) − m_k‖² = K(x_i, x_i) − (2/l_k) Σ_{x_j ∈ C_k} K(x_i, x_j) + (1/l_k²) Σ_{x_j, x_{j'} ∈ C_k} K(x_j, x_{j'}),
wherein θ(x_i) denotes the image of sample x_i in the high-dimensional kernel space G, K(x, x_i) and K(x_i, x_j) denote the kernel function, and l_k denotes the number of samples in the subset of class k;
and merging the samples, taking the mean of each merged sample set, and obtaining the clustering result.
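The kernel-trick expansion of the distance to a cluster center can be sketched directly. The RBF kernel and its gamma value below are illustrative assumptions; the patent does not name a specific kernel function.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel K(a, b); kernel choice and gamma are illustrative."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_dist_to_center(x, cluster, kernel=rbf):
    """Squared distance ||theta(x) - m_k||^2 expanded via the kernel trick:
    K(x,x) - (2/l_k) * sum_j K(x, x_j) + (1/l_k^2) * sum_{j,j'} K(x_j, x_j')."""
    lk = len(cluster)  # l_k: number of samples in the class-k subset
    term1 = kernel(x, x)
    term2 = 2.0 / lk * sum(kernel(x, xj) for xj in cluster)
    term3 = sum(kernel(xi, xj) for xi in cluster for xj in cluster) / lk ** 2
    return term1 - term2 + term3
```

No explicit mapping θ is ever evaluated: every term is a kernel evaluation on the original samples, which is what makes clustering in the high-dimensional space G tractable.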
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention removes some feature layers of YOLOv3 and retains two independent prediction branches. Specifically, YOLOv3-tiny has 24 network layers, greatly reduced from the 107 layers of YOLOv3, which keeps the recognition rate for a single object class close to that of YOLOv3 while meeting the recognition-speed requirement; this makes YOLOv3-tiny suitable for real-time recognition projects and more commonly used in engineering. Running the full YOLOv3 on an industrial robot (an embedded device) would occupy a large amount of computing resources and hinder other functions, a problem the lighter network solves;
(2) the K-means clustering algorithm is improved: the data set is first preprocessed with a leader clustering algorithm to generate several sample subsets, those subsets are then sampled, and finally a kernel K-means clustering operation is performed; the initial position of each anchor is determined by computing the mean distances between sample subsets and integrating the clustering result, after which the data is sent into the YOLOv3-tiny model for target detection. This resolves the phenomena of similar categories being mixed together and classified inaccurately;
(3) compared with other pedestrian recognition models, the method recognizes the lower body and other body parts of a person more effectively and is suitable for complex scenes;
in conclusion, the invention achieves real-time pedestrian detection for a complex factory environment and a camera with a specific viewing angle while keeping the computational cost low enough for deployment on embedded devices (industrial robots); compared with the prior art, the method offers simple logic, low investment cost and high computational efficiency, and has high practical and popularization value in the technical field of pedestrian detection.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the embodiments are briefly described below. The following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope of protection; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is an architecture diagram of the improved YOLOv3-tiny network model of the present invention.
FIG. 2 is a diagram illustrating an application scenario of the present invention.
FIG. 3 is a detection diagram (one) of the present invention.
Fig. 4 is a detection diagram (two) of the present invention.
Detailed Description
To further clarify the objects, technical solutions and advantages of the present application, the invention is described below with reference to the accompanying drawings and examples. The embodiments of the invention include, but are not limited to, the following examples; all other embodiments derived by a person skilled in the art without creative effort shall fall within the protection scope of the present application.
Examples
As shown in fig. 1 to 4, the present embodiment provides a real-time pedestrian detection method in a factory environment based on improved YOLOv3-tiny, which includes the following steps:
the method comprises the steps of firstly, acquiring images of pedestrians in a factory environment, and carrying out additional labeling on the extracted images of the lower half body and the body part of a human body to obtain a training set and a testing set.
And secondly, constructing an improved YOLOv3-tiny network model.
Thirdly, performing iterative training on the training set by using the improved YOLOv3-tiny network model to obtain the characteristics corresponding to the training set by learning, and obtaining the trained improved YOLOv3-tiny network model; wherein, the iteration number of the iterative training is 30000 times.
In this embodiment, the improved YOLOv3-tiny network model shown in fig. 1 extracts features by a regression approach: it regresses categories and bounding boxes directly on the feature layers, obtains preselection-box coordinates by convolving the image, performs non-maximum suppression, and screens the preselection boxes. The probability that a person falls within a box is P = Pr(Person | Object), and the initial positions of the preselection boxes are determined by the improved K-means algorithm, as follows:
(11) The training set and the test set are preprocessed with a leader clustering algorithm to generate several sample subsets, and the subsets are sampled;
(12) the mean distances between the sample subsets are computed, integrated with the K-means clustering result, and used to determine the initial positions of the preselection-box centers;
(13) the images of the training set and the test set are sent into the improved YOLOv3-tiny network model for target detection.
Specifically, the method comprises the following steps:
(101) The data set of the training set and/or the test set is set as X = (x_1, x_2, x_3, ..., x_n), and the threshold of the distance to the cluster center is M, where M takes the value of the standard deviation of the samples in the K-nearest-neighbour sample set; each sample x_i belongs to the data set X.
(102) Each sample x_i (i = 1, 2, 3, ..., l) is mapped into a high-dimensional kernel space G through a nonlinear mapping θ, where l denotes the total number of samples.
(103) The K-means clustering operation is performed in the high-dimensional kernel space G with the optimized objective function
J = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} ‖θ(x_i) − m_k‖²,
wherein m_k denotes the sample mean of class k,
m_k = (1/l_k) Σ_{x_j ∈ C_k} θ(x_j),
θ(x_i) denotes the image of sample x_i in the clustering space, and k denotes a class label of the K-means clustering.
(104) The distance from a feature point to a cluster center in the kernel space is calculated via the kernel trick, with the expression
‖θ(x_i) − m_k‖² = K(x_i, x_i) − (2/l_k) Σ_{x_j ∈ C_k} K(x_i, x_j) + (1/l_k²) Σ_{x_j, x_{j'} ∈ C_k} K(x_j, x_{j'}),
wherein θ(x_i) denotes the image of sample x_i in the high-dimensional kernel space G, K(x, x_i) and K(x_i, x_j) denote the kernel function, and l_k denotes the number of samples in the subset of class k.
(105) In this embodiment, the distance between any two classes is calculated; any two classes whose distance is less than the set threshold are merged, and the final clustering result is obtained after fusing the union of their sample subsets. The improved network remedies the traditional network's sensitivity to noise and interference points, which would otherwise cause confusion of the recognized targets.
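The merging step of (105) can be sketched as repeatedly fusing any two clusters whose centers fall below the set threshold and recomputing the merged center as the mean of the union. Euclidean center distance stands in here for the kernel-space distance used in the text; that substitution is an assumption made for illustration.

```python
import numpy as np

def merge_close_clusters(centers, members, merge_thresh):
    """Fuse cluster pairs closer than `merge_thresh`; return new centers/members.

    Euclidean distance between centers is an illustrative stand-in for
    the kernel-space class distance described in the embodiment.
    """
    centers = [np.asarray(c, dtype=float) for c in centers]
    members = [list(m) for m in members]
    merged = True
    while merged:  # repeat until no pair is below the threshold
        merged = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if np.linalg.norm(centers[i] - centers[j]) < merge_thresh:
                    members[i] += members[j]                 # union of the subsets
                    centers[i] = np.mean(members[i], axis=0)  # new fused center
                    del centers[j], members[j]
                    merged = True
                    break
            if merged:
                break
    return centers, members
```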
Fourth, the test-set images are fed into the trained improved YOLOv3-tiny network model to obtain the pedestrian preselection boxes for those images. This embodiment fills a gap in recognizing and detecting the lower half of the human body and, compared with the prior art, possesses specific and prominent substantive features and represents notable progress.
The above embodiments are only preferred embodiments of the present invention and do not limit its scope of protection; all modifications made according to the principles of the present invention on the basis of the above embodiments without inventive effort shall fall within the scope of the present invention.

Claims (5)

1. A real-time pedestrian detection method based on an improved YOLOv3-tiny network in a factory environment, characterized by comprising the following steps:
acquiring pedestrian images in a factory environment and additionally annotating the lower body and partial body regions of each pedestrian in the extracted images to obtain a training set and a test set;
constructing an improved YOLOv3-tiny network model;
iteratively training the improved YOLOv3-tiny network model on the training set so that it learns the features of the training set, yielding the trained improved YOLOv3-tiny network model;
and feeding the test-set images into the trained improved YOLOv3-tiny network model to obtain the pedestrian preselection boxes for those images.
2. The real-time pedestrian detection method based on the improved YOLOv3-tiny in a factory environment according to claim 1, wherein screening the preselection boxes with the improved YOLOv3-tiny network model comprises the following steps:
regressing categories and bounding boxes on the feature layers of the improved YOLOv3-tiny network model;
convolving the image to obtain preselection-box coordinates;
and applying non-maximum suppression to the preselection-box coordinates to obtain the screened preselection boxes.
3. The improved YOLOv3-tiny based real-time pedestrian detection method in factory environment according to claim 1, wherein the number of iterations of the iterative training is 30000.
4. The real-time pedestrian detection method based on the improved YOLOv3-tiny in a factory environment according to claim 2, wherein the target detection of the improved YOLOv3-tiny network model comprises the following steps:
preprocessing the training set and the test set with a leader clustering algorithm to generate several sample subsets, and sampling from those subsets;
computing the mean distances between the sample subsets, integrating them with the K-means clustering result, and determining the initial positions of the preselection-box centers;
and sending the images of the training set and the test set into the improved YOLOv3-tiny network model for target detection.
5. The real-time pedestrian detection method based on the improved YOLOv3-tiny in a factory environment according to claim 4, wherein the clustering operation of the improved YOLOv3-tiny network model comprises the following steps:
setting the data set of the training set and/or the test set as X = (x_1, x_2, x_3, ..., x_n) and the threshold of the distance to the cluster center as M, where M takes the value of the standard deviation of the samples in the K-nearest-neighbour sample set; each sample x_i belongs to the data set X;
mapping each sample x_i (i = 1, 2, 3, ..., l) into a high-dimensional kernel space G through a nonlinear mapping θ, where l denotes the total number of samples;
performing the K-means clustering operation in the high-dimensional kernel space G with the optimized objective function
J = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} ‖θ(x_i) − m_k‖²,
wherein m_k denotes the sample mean of class k,
m_k = (1/l_k) Σ_{x_j ∈ C_k} θ(x_j),
θ(x_i) denotes the image of sample x_i in the clustering space, and k denotes a class label of the K-means clustering;
calculating the distance from a feature point to a cluster center in the kernel space via the kernel trick, with the expression
‖θ(x_i) − m_k‖² = K(x_i, x_i) − (2/l_k) Σ_{x_j ∈ C_k} K(x_i, x_j) + (1/l_k²) Σ_{x_j, x_{j'} ∈ C_k} K(x_j, x_{j'}),
wherein θ(x_i) denotes the image of sample x_i in the high-dimensional kernel space G, K(x, x_i) and K(x_i, x_j) denote the kernel function, and l_k denotes the number of samples in the subset of class k;
and merging the samples, taking the mean of each merged sample set, and obtaining the clustering result.
CN202010852184.XA 2020-08-21 2020-08-21 Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment Pending CN111950500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852184.XA CN111950500A (en) 2020-08-21 2020-08-21 Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010852184.XA CN111950500A (en) 2020-08-21 2020-08-21 Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment

Publications (1)

Publication Number Publication Date
CN111950500A true CN111950500A (en) 2020-11-17

Family

ID=73359855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852184.XA Pending CN111950500A (en) 2020-08-21 2020-08-21 Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment

Country Status (1)

Country Link
CN (1) CN111950500A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734807A (en) * 2021-01-15 2021-04-30 湖南千盟物联信息技术有限公司 Method for automatically tracking plate blank on continuous casting roller way based on computer vision
CN115222804A (en) * 2022-09-05 2022-10-21 成都睿芯行科技有限公司 Industrial material cage identification and positioning method based on depth camera point cloud data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784278A (en) * 2019-01-17 2019-05-21 上海海事大学 The small and weak moving ship real-time detection method in sea based on deep learning
CN109882019A (en) * 2019-01-17 2019-06-14 同济大学 A kind of automobile power back door open method based on target detection and action recognition
CN111046787A (en) * 2019-12-10 2020-04-21 华侨大学 Pedestrian detection method based on improved YOLO v3 model

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784278A (en) * 2019-01-17 2019-05-21 上海海事大学 The small and weak moving ship real-time detection method in sea based on deep learning
CN109882019A (en) * 2019-01-17 2019-06-14 同济大学 A kind of automobile power back door open method based on target detection and action recognition
CN111046787A (en) * 2019-12-10 2020-04-21 华侨大学 Pedestrian detection method based on improved YOLO v3 model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BBUF: "Object Detection Algorithms: YOLOv3 and YOLOv3-Tiny", HTTPS://ZHUANLAN.ZHIHU.COM/P/93809416 *
Yang Lei et al.: "A pedestrian detection method in an intelligent video surveillance system", Computer and Modernization *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734807A (en) * 2021-01-15 2021-04-30 湖南千盟物联信息技术有限公司 Method for automatically tracking plate blank on continuous casting roller way based on computer vision
CN115222804A (en) * 2022-09-05 2022-10-21 成都睿芯行科技有限公司 Industrial material cage identification and positioning method based on depth camera point cloud data

Similar Documents

Publication Publication Date Title
CN106897670B (en) Express violence sorting identification method based on computer vision
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
Jalal et al. The state-of-the-art in visual object tracking
Boult et al. Into the woods: Visual surveillance of noncooperative and camouflaged targets in complex outdoor settings
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
WO2021031954A1 (en) Object quantity determination method and apparatus, and storage medium and electronic device
CN111161315A (en) Multi-target tracking method and system based on graph neural network
CN112861575A (en) Pedestrian structuring method, device, equipment and storage medium
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN111476089B (en) Pedestrian detection method, system and terminal for multi-mode information fusion in image
CN108986142A (en) Shelter target tracking based on the optimization of confidence map peak sidelobe ratio
WO2022199360A1 (en) Moving object positioning method and apparatus, electronic device, and storage medium
CN112068555A (en) Voice control type mobile robot based on semantic SLAM method
CN111950500A (en) Real-time pedestrian detection method based on improved YOLOv3-tiny in factory environment
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN110310305A (en) A kind of method for tracking target and device based on BSSD detection and Kalman filtering
Zhang et al. Visual saliency based object tracking
CN117392484A (en) Model training method, device, equipment and storage medium
CN115359468A (en) Target website identification method, device, equipment and medium
Wang et al. Extraction of main urban roads from high resolution satellite images by machine learning
CN113989920A (en) Athlete behavior quality assessment method based on deep learning
CN114067359A (en) Pedestrian detection method integrating human body key points and attention features of visible parts
CN113781521A (en) Improved YOLO-Deepsort-based bionic robot fish detection and tracking method
Zhang et al. Neural guided visual slam system with Laplacian of Gaussian operator
CN112507940A (en) Skeleton action recognition method based on difference guidance representation learning network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201117