CN111062311A - Pedestrian gesture recognition and interaction method based on depth-level separable convolutional network


Info

Publication number
CN111062311A
CN111062311A
Authority
CN
China
Prior art keywords
pedestrian
depth
gesture recognition
point
separable convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911281009.3A
Other languages
Chinese (zh)
Other versions
CN111062311B (en)
Inventor
秦文虎
张仕超
孙立博
张哲
平鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201911281009.3A
Publication of CN111062311A
Application granted
Publication of CN111062311B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract



The invention relates to a pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network, comprising: collecting images containing pedestrians through a front-view camera system installed on a vehicle; inputting the images into a depth-level separable convolutional network to detect pedestrian bounding boxes; and inputting the image of each bounding-box region into a gesture recognition network, which outputs a feature map of the pedestrian region. The gesture recognition network extracts features through depth-level separable convolutional layers and, at each point of the output feature map, predicts information for 12 human body joint points together with 12 corresponding offset vectors; pedestrian gestures are finally understood by classifying the joint points. Based on the recognized pedestrian gestures, combined with gesture priorities, the vehicle adopts the most conservative strategy to make decisions. The invention implements the model with depth-level separable convolutions, shrinking the model size several-fold, so that detection can run on low-power mobile terminals such as smartphones.


Description

Pedestrian gesture recognition and interaction method based on depth-level separable convolutional network
Technical Field
The invention relates to a pedestrian gesture recognition and interaction technology based on a depth-level separable convolutional network, and belongs to the technical field of advanced automobile driver assistance.
Background
The driving environment perception function is an important function of advanced driver assistance systems (ADAS). Pedestrians, as an important component of public traffic scenarios, have a significant impact on vehicle driving decisions. Currently, most research focuses on how to make autonomous vehicles drive efficiently and safely, while research on interaction with pedestrians is lacking. Therefore, as an important part of driving environment perception, recognizing pedestrian gestures and interacting with pedestrians is an urgent need.
Currently, there are two main approaches to recognizing pedestrian gestures. The first is based on traditional statistical learning and relies on complicated feature engineering to obtain pedestrian gesture information. The second uses deep learning: image features are extracted by a convolutional network, and a suitable loss function is designed over the output feature map to train the model, finally achieving pedestrian gesture recognition. Although traditional statistical learning based on feature engineering requires little computation and is simple to implement, its recognition accuracy is poor because the feature engineering is overly complex. Models based on deep convolutional networks achieve high recognition accuracy, but most of them need high-performance GPUs to run in real time.
Chinese patent application publication No. CN107423679A proposes a pedestrian intention detection method and system, comprising: arranging a distance sensor to collect target shape data in an observation area; acquiring track information of each target based on its existing state information; and judging the action intention of each target according to its movement track and spatial information. This method only predicts the pedestrian's walking track and does not achieve pedestrian-vehicle interaction. In addition, Chinese patent application publication No. CN104915628A proposes a pedestrian intention detection model for automated vehicles, comprising: acquiring the basic scene elements of the traffic scene around a pedestrian that relate to the pedestrian's movement intention; analyzing, based on those basic scene elements and the three-dimensional (3D) distance information of the pedestrian over time, the relationship between the pedestrian's state changes while walking and each surrounding basic scene element; establishing a context correlation model between the pedestrian and all surrounding basic scene elements from the obtained relationships; and predicting the pedestrian's next motion state with the established context correlation model, based on the scene elements related to the current pedestrian obtained in real time. This method likewise has no pedestrian-vehicle interaction process, needs to identify considerable additional scene and 3D information, is computationally very expensive, and does not indicate how to respond when multiple pedestrians are present at the same time.
Disclosure of Invention
The technical problem to be solved by the invention is as follows:
the invention provides a pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network, and aims to solve the problems of large model calculation amount, low recognition speed and poor pedestrian and vehicle interactivity in the process of recognizing and interacting pedestrian gestures of an autonomous driving automobile.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides a pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network, which is characterized by comprising the following steps of:
step one, collecting an image containing pedestrians;
step two, inputting the image into the depth-level separable convolutional network, detecting the pedestrian bounding box, inputting the image of the bounding-box region into a gesture recognition network, and outputting a feature map of the pedestrian region;
step three, calculating joint point coordinates and classifying them to obtain a gesture recognition result;
step four, sorting the gestures by priority;
and step five, obtaining the final interaction decision of the moving vehicle according to the gesture with the highest priority (a minimal end-to-end sketch follows below).
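For concreteness, the following minimal Python sketch strings the five steps together; the callables `detect` and `recognize` and the priority table are illustrative assumptions standing in for the networks described below, not parts of the patent:

```python
def pedestrian_interaction(frame, detect, recognize, priorities):
    """Steps one to five: detect pedestrians in a frame, recognize each
    pedestrian's gesture, sort by priority, and return the decision."""
    # detect(frame) -> pedestrian bounding boxes (step two)
    # recognize(frame, box) -> gesture label for that pedestrian (step three)
    gestures = [recognize(frame, box) for box in detect(frame)]
    if not gestures:
        return "normal_driving"
    # steps four and five: the highest-priority (most conservative) gesture wins
    return max(gestures, key=lambda g: priorities.get(g, 0))
```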
In the above pedestrian gesture recognition and interaction method based on the depth-level separable convolutional network, the depth-level separable convolutional neural network of step two specifically comprises the following steps (a code sketch follows the list):
step 2.1, depthwise convolution;
step 2.2, batch normalization;
step 2.3, ReLU activation;
step 2.4, pointwise convolution;
step 2.5, batch normalization;
and step 2.6, ReLU activation.
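As a concrete illustration of steps 2.1 to 2.6, a minimal PyTorch sketch of one depth-level separable convolution block follows; the 3×3 kernel size, stride, and channel counts are assumptions, since the patent does not fix them:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """One depth-level separable convolution block (steps 2.1 to 2.6)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # step 2.1: depthwise convolution, one 3x3 kernel per input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)        # step 2.2: batch normalization
        self.relu1 = nn.ReLU(inplace=True)      # step 2.3: ReLU activation
        # step 2.4: pointwise convolution, an ordinary 1x1 convolution mixing channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)       # step 2.5: batch normalization
        self.relu2 = nn.ReLU(inplace=True)      # step 2.6: ReLU activation

    def forward(self, x):
        x = self.relu1(self.bn1(self.depthwise(x)))
        return self.relu2(self.bn2(self.pointwise(x)))

block = DepthwiseSeparableConv(32, 64)
out = block(torch.randn(1, 32, 56, 56))         # -> shape (1, 64, 56, 56)
```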
In the above pedestrian gesture recognition and interaction method based on the depth-level separable convolutional network, each feature point in the feature map of step two contains the probabilities that each of 12 human body joint points is present at that point, together with an offset vector for each joint point at that point.
In the above method, the joint point classification of step two adopts the depth-level separable convolution structure to streamline the model.
In the above method, the specific steps of classifying the joint points in step three include:
step 3.1, calculating the joint point coordinates: using the confidence of the human joint point distribution feature maps contained in the feature points obtained in step two, combined with the offset vector feature maps of the corresponding points, the point with the highest confidence in each feature map is found to determine the joint point category, and the joint point position is then obtained from the offset vector, yielding the complete information of the human body joint points;
step 3.2, normalization: after the human body joint point coordinates are obtained, the midpoint of the line connecting the left and right shoulders is taken as the center, the center coordinates are subtracted from all joint points, and normalization is performed;
step 3.3, classification: the normalized data are classified with a support vector machine or a single fully-connected layer to obtain the final pedestrian gesture recognition result.
In the above method, in step five, when multiple pedestrians around the vehicle are detected making different gestures at the same time, the action decision is made with the most conservative strategy according to the different priorities of the pedestrian gestures. The model must recognize the gestures of all of these pedestrians simultaneously; once their gesture information is obtained, the gestures are sorted by priority and the most conservative strategy is adopted in response. For example, if some pedestrians ask the vehicle to slow down while others ask it to stop, the stopping strategy is executed first. This maximizes traffic safety.
The model keeps the pedestrian states in the field of view up to date; when no pedestrian is in view, or no pedestrian gesture requires the vehicle to yield, the vehicle returns to a normal driving state.
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
because the method is realized based on the depth-level separable convolution model, compared with the traditional deep learning model, the method has the advantages that the scale is reduced by times, the support of special hardware or GPU equipment is not needed, and the application cost is reduced. Meanwhile, the identification precision can be ensured, and the application scene is greatly widened. The technical scheme provided by the invention can realize the real-time recognition of the pedestrian gesture information on low-power-consumption mobile equipment such as a mobile phone. And, after the information is recognized, the vehicle and the pedestrian make effective interaction. In addition, for a scene with a plurality of pedestrians in front of the vehicle, the model can adopt the most conservative strategy to make a decision according to the priority of the pedestrian gesture, and the traffic safety is guaranteed to the maximum extent.
Drawings
FIG. 1 is a schematic diagram of the depth-level separable convolutional network;
FIG. 2 is a schematic of the process of the present invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network; FIG. 2 is a schematic of the process. As shown in FIG. 2, the method comprises the following steps:
a front image is first captured by a camera mounted in front of the vehicle. The parameters of video data collected by a forward-looking camera used in the invention are 1280 multiplied by 720@60FPS, video frames are color images and comprise RGB three-channel color information, the color information is expressed by tensor of (1280,720,3) dimensionality, each element in the tensor is an integer, and the value range is [0,255 ].
The image is then input into the depth-level separable convolutional neural network to detect the pedestrian bounding box. The invention uses the depth-level separable convolution structure to split the traditional convolution into two steps, a depthwise convolution and a pointwise convolution; on the premise of preserving the model's recognition quality, this split reduces the model volume several-fold. FIG. 1 is a schematic diagram of the depth-level separable convolutional network. As shown in FIG. 1, the structure divides an ordinary convolution operation into a depthwise convolution and a pointwise convolution. The depthwise convolution applies a different kernel to each input channel, i.e., one kernel corresponds to one input channel; the pointwise convolution is just an ordinary convolution that uses a 1×1 kernel. A feature map is extracted by cascading several depth-level separable convolution modules, and the pedestrian bounding box is obtained from this feature map.
The obtained pedestrian region image is then input into the gesture recognition network. A feature extraction network for human body joint points is built by cascading several depth-level separable convolution modules. The feature map output by the pedestrian gesture recognition network contains S×S×36 features, where S is the size of the output feature map and each feature point consists of a feature vector of 36 values. These 36 values contain the probabilities that each of 12 human body joint points is present at that feature point, plus an offset vector for each joint point at that point. The coordinates of the pedestrian's body joint points are obtained by combining the probability feature maps with the offset vector maps, as sketched below.
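A minimal NumPy sketch of this decoding follows; the channel layout (12 confidence channels followed by 12 two-dimensional offsets) and the offset ordering are assumptions, since the patent does not specify them:

```python
import numpy as np

def decode_joints(feat, num_joints=12):
    """Decode joint coordinates from an (S, S, 36) feature map: channels
    [0:12] are per-joint confidence maps, channels [12:36] hold one 2-D
    offset vector per joint (assumed layout)."""
    S = feat.shape[0]
    probs = feat[..., :num_joints]                                # (S, S, 12)
    offsets = feat[..., num_joints:].reshape(S, S, num_joints, 2)
    joints = np.zeros((num_joints, 2))
    for j in range(num_joints):
        # the point with the highest confidence determines joint j ...
        y, x = np.unravel_index(np.argmax(probs[..., j]), (S, S))
        # ... and the offset vector at that point refines its position
        joints[j] = np.array([x, y]) + offsets[y, x, j]
    return joints
```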
After the human body joint point coordinates are obtained, the midpoint of the line connecting the left and right shoulders is taken as the center, the center coordinates are subtracted from all joint points, and the result is normalized; finally, the normalized data are classified with a support vector machine or a single fully-connected layer, giving the final pedestrian gesture recognition result (a sketch of the normalization follows).
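A sketch of this normalization, assuming joint indices 1 and 2 denote the left and right shoulders (the patent does not number the joints) and using unit-norm scaling as one plausible choice:

```python
import numpy as np

def normalize_joints(joints, l_shoulder=1, r_shoulder=2):
    """Center all joints on the midpoint of the shoulder line, then scale
    to unit norm so the classifier sees scale-invariant input."""
    center = (joints[l_shoulder] + joints[r_shoulder]) / 2.0
    centered = joints - center
    return (centered / (np.linalg.norm(centered) + 1e-8)).ravel()  # 24-D vector
```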
In this step, the gesture recognition network uses the depth-level separable convolution structure to streamline the model, and the final gesture classification result is obtained with a support vector machine or a fully-connected layer, as sketched below.
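For the support vector machine option, a scikit-learn sketch with placeholder training data (real training data would be normalized joint vectors labeled with gestures):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: 24-D normalized joint vectors and gesture labels
X = np.random.rand(100, 24)
y = np.random.randint(0, 3, size=100)   # e.g. 0 = none, 1 = slow down, 2 = stop
clf = SVC(kernel="rbf")                 # a single fully-connected layer also works
clf.fit(X, y)
gesture = clf.predict(X[:1])[0]
```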
When multiple pedestrians appear in front of the vehicle at the same time, the model must recognize all of their gestures simultaneously; once the gesture information of the pedestrians is obtained, the gestures are sorted by priority and the most conservative strategy is adopted in response. For example, if some pedestrians ask the vehicle to slow down while others ask it to stop, the stopping strategy is executed first. This maximizes traffic safety. A sketch of this decision rule follows.
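The decision rule itself reduces to taking a maximum over gesture priorities; in the sketch below the concrete gesture set and its ordering are illustrative assumptions (the patent states only that a stop request outranks a slow-down request):

```python
# Higher value = more conservative response; the ordering is an assumed example
GESTURE_PRIORITY = {"none": 0, "slow_down": 1, "stop": 2}

def decide(gestures):
    """Return the most conservative action among all pedestrians' gestures."""
    if not gestures:
        return "normal_driving"
    return max(gestures, key=lambda g: GESTURE_PRIORITY.get(g, 0))

print(decide(["slow_down", "stop", "none"]))    # -> "stop": stopping is executed first
```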
When no pedestrian is in front of the vehicle or no extra request is made to the vehicle by the pedestrian gesture in the field of view, the vehicle enters a normal driving state.
The foregoing is only a partial embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network, characterized by comprising the following steps:
step one, collecting images containing pedestrians;
step two, inputting the images into the depth-level separable convolutional network, detecting the pedestrian bounding box, inputting the image of the bounding-box region into a gesture recognition network, and outputting a feature map of the pedestrian region;
step three, calculating joint point coordinates and classifying them to obtain a gesture recognition result;
step four, sorting the gestures by priority;
step five, obtaining the final interaction decision of the moving vehicle according to the gesture with the highest priority.
2. The pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network according to claim 1, characterized in that the depth-level separable convolutional neural network of step two specifically comprises:
step 2.1, depthwise convolution;
step 2.2, batch normalization;
step 2.3, ReLU activation;
step 2.4, pointwise convolution;
step 2.5, batch normalization;
step 2.6, ReLU activation.
3. The pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network according to claim 1, characterized in that each feature point in the feature map of step two contains the probabilities that each of 12 human body joint points is present at that point, together with an offset vector for each joint point at that point.
4. The pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network according to claim 1, characterized in that the joint point classification of step two adopts the depth-level separable convolution structure to streamline the model.
5. The pedestrian gesture recognition and interaction method based on a depth-level separable convolutional network according to claim 4, characterized in that the specific steps of classifying the joint points in step three include:
step 3.1, calculating the joint point coordinates: using the confidence of the human joint point distribution feature maps contained in the feature points obtained in step two, combined with the offset vector feature maps of the corresponding points, finding the point with the highest confidence in each feature map to determine the joint point category, then obtaining the joint point position from the offset vector, thereby obtaining the complete information of the human body joint points;
step 3.2, normalization: after the human body joint point coordinates are obtained, taking the midpoint of the line connecting the left and right shoulders as the center, subtracting the center coordinates from all joint points, and performing normalization;
step 3.3, classification: classifying the normalized data with a support vector machine or a single fully-connected layer to obtain the final pedestrian gesture recognition result.
6. The pedestrian gesture recognition and interaction model according to claim 1, characterized in that in step five, when multiple pedestrians around the vehicle are simultaneously detected making different gestures, the action decision is made with the most conservative strategy according to the different priorities of the pedestrian gestures.
CN201911281009.3A 2019-12-13 2019-12-13 Pedestrian gesture recognition and interaction method based on depth-level separable convolution network Active CN111062311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911281009.3A CN111062311B (en) 2019-12-13 2019-12-13 Pedestrian gesture recognition and interaction method based on depth-level separable convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911281009.3A CN111062311B (en) 2019-12-13 2019-12-13 Pedestrian gesture recognition and interaction method based on depth-level separable convolution network

Publications (2)

Publication Number Publication Date
CN111062311A (en) 2020-04-24
CN111062311B CN111062311B (en) 2023-05-23

Family

ID=70301176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911281009.3A Active CN111062311B (en) 2019-12-13 2019-12-13 Pedestrian gesture recognition and interaction method based on depth-level separable convolution network

Country Status (1)

Country Link
CN (1) CN111062311B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546824A (en) * 2022-04-18 2022-12-30 荣耀终端有限公司 Taboo image recognition method, device and storage medium
CN117711014A (en) * 2023-07-28 2024-03-15 荣耀终端有限公司 Method and device for identifying space-apart gestures, electronic equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117701A (en) * 2018-06-05 2019-01-01 东南大学 Pedestrian's intension recognizing method based on picture scroll product
CN109613930A (en) * 2018-12-21 2019-04-12 中国科学院自动化研究所南京人工智能芯片创新研究院 Control method, device, unmanned vehicle and the storage medium of unmanned vehicle
CN110096968A (en) * 2019-04-10 2019-08-06 西安电子科技大学 A kind of ultrahigh speed static gesture identification method based on depth model optimization
CN110096973A (en) * 2019-04-16 2019-08-06 东南大学 A kind of traffic police's gesture identification method separating convolutional network based on ORB algorithm and depth level

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117701A (en) * 2018-06-05 2019-01-01 东南大学 Pedestrian's intension recognizing method based on picture scroll product
CN109613930A (en) * 2018-12-21 2019-04-12 中国科学院自动化研究所南京人工智能芯片创新研究院 Control method, device, unmanned vehicle and the storage medium of unmanned vehicle
CN110096968A (en) * 2019-04-10 2019-08-06 西安电子科技大学 A kind of ultrahigh speed static gesture identification method based on depth model optimization
CN110096973A (en) * 2019-04-16 2019-08-06 东南大学 A kind of traffic police's gesture identification method separating convolutional network based on ORB algorithm and depth level

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHICHAO ZHANG et al.: "One For All: A Mutual Enhancement Method for", MDPI *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546824A (en) * 2022-04-18 2022-12-30 荣耀终端有限公司 Taboo image recognition method, device and storage medium
CN115546824B (en) * 2022-04-18 2023-11-28 荣耀终端有限公司 Taboo picture identification methods, equipment and storage media
CN117711014A (en) * 2023-07-28 2024-03-15 荣耀终端有限公司 Method and device for identifying space-apart gestures, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN111062311B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
WO2021218786A1 (en) Data processing system, object detection method and apparatus thereof
Nguyen et al. Learning framework for robust obstacle detection, recognition, and tracking
Hoang et al. Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning
EP2601615B1 (en) Gesture recognition system for tv control
CN112487862B (en) Garage pedestrian detection method based on improved EfficientDet model
CN111292366B (en) Visual driving ranging algorithm based on deep learning and edge calculation
CN111797657A (en) Vehicle surrounding obstacle detection method, device, storage medium and electronic device
CN111027505B (en) Hierarchical multi-target tracking method based on significance detection
JP2016062610A (en) Feature model generation method and feature model generation device
CN113378641B (en) Gesture recognition method based on deep neural network and attention mechanism
CN110378243A (en) A kind of pedestrian detection method and device
Dewangan et al. Towards the design of vision-based intelligent vehicle system: methodologies and challenges
CN110222718A (en) The method and device of image procossing
CN114764856A (en) Image semantic segmentation method and image semantic segmentation device
Tran et al. Enhancement of robustness in object detection module for advanced driver assistance systems
WO2024093321A1 (en) Vehicle position acquiring method, model training method, and related device
CN110249366A (en) Image feature amount output device, pattern recognition device, image feature amount output program and image recognition program
Sun et al. Semantic-aware 3D-voxel CenterNet for point cloud object detection
CN111062311B (en) Pedestrian gesture recognition and interaction method based on depth-level separable convolution network
CN113723170A (en) Integrated hazard detection architecture system and method
CN115082869A (en) A vehicle-road collaborative multi-target detection method and system for special vehicles
CN113569803A (en) Multi-mode data fusion lane target detection method and system based on multi-scale convolution
Yahya et al. Object detection and recognition in autonomous vehicles using fast region-convolutional neural network
CN113191324A (en) Pedestrian behavior intention prediction method based on multi-task learning
CN116434173B (en) Road image detection method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant