CN113221612A - Visual intelligent pedestrian monitoring system and method based on Internet of things - Google Patents
- Publication number
- CN113221612A (application CN202011372747.1A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- pedestrians
- camera
- internet
- things
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000012544 monitoring process Methods 0.000 title claims abstract description 44
- 238000000034 method Methods 0.000 title claims abstract description 16
- 230000000007 visual effect Effects 0.000 title claims abstract description 15
- 238000004891 communication Methods 0.000 claims abstract description 17
- 230000006854 communication Effects 0.000 claims abstract description 17
- 230000003993 interaction Effects 0.000 claims abstract description 13
- 230000005540 biological transmission Effects 0.000 claims abstract description 5
- 238000012790 confirmation Methods 0.000 claims abstract description 4
- 230000006870 function Effects 0.000 claims description 4
- 238000010801 machine learning Methods 0.000 claims description 4
- 230000007175 bidirectional communication Effects 0.000 claims description 3
- 238000003066 decision tree Methods 0.000 claims description 3
- 238000012549 training Methods 0.000 claims description 3
- 238000005516 engineering process Methods 0.000 description 8
- 238000013527 convolutional neural network Methods 0.000 description 7
- 238000011160 research Methods 0.000 description 5
- 238000013528 artificial neural network Methods 0.000 description 4
- 238000010295 mobile communication Methods 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 238000004883 computer application Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003058 natural language processing Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/10—Detection; Monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Biodiversity & Conservation Biology (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a visual intelligent pedestrian monitoring system and method based on the Internet of Things. A first camera identifies non-database pedestrians, namely unknown pedestrians, in the monitoring video; a second camera detects the total number of pedestrians in the monitoring video; the information interaction end, the gateway and the server store and transmit pedestrian information; the user terminal allows the user to acquire real-time pedestrian information or to send instructions to the server; the wireless communication device is connected to the gateway, and after the user terminal issues an instruction, the server and the gateway forward it in turn to the wireless communication device, so that the system can raise an alarm or confirm a pedestrian's identity and purpose in real time by communicating with the pedestrian. The method is intelligent, real-time and low-cost.
Description
Technical Field
The invention belongs to the technical field of Internet of Things monitoring, and particularly relates to a visual intelligent pedestrian monitoring system and method based on the Internet of Things.
Background
The Internet of Things is regarded as the third revolution of the information technology industry. It refers to connecting any object to a network through information sensing devices according to agreed protocols, and exchanging information and communicating about the object over an information transmission medium, so as to achieve intelligent identification, positioning, tracking, supervision and similar functions; Internet of Things technology represents a future direction of the Internet.
The concept of deep learning was proposed by Hinton et al. in 2006 and originates from the study of artificial neural networks, that is, the interconnection of neurons. By combining low-level features into more abstract high-level representations of attribute categories or features, deep learning discovers distributed feature representations of data, and it is now widely applied in computer vision, speech recognition, natural language processing and other fields.
With the improvement of computer hardware performance, pedestrian recognition based on deep neural networks has regained the interest of researchers and scholars, and deep learning has become a current research hotspot in computer vision. This research spans image processing, computer vision, machine learning, image retrieval and other fields, has important scientific significance, can be widely applied in areas such as intelligent security and surveillance, and has good application prospects.
In an era in which information, networks and wireless mobile communication technologies renew and upgrade themselves at a remarkable pace, 5G, the fifth-generation and latest cellular mobile communication technology, represents the newest direction of wireless mobile communication and has in recent years been widely researched, developed and put into practical use.
Disclosure of Invention
The technical problem to be solved by the invention is, in view of the shortcomings of the prior art, to provide a visual intelligent pedestrian monitoring system and method based on the Internet of Things.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
a visual intelligent pedestrian monitoring system based on the Internet of things comprises a first camera, a second camera, an information interaction end, a gateway, a server, a user terminal and a wireless communication device;
the first camera identifies non-database pedestrians, namely unknown pedestrians, in the monitoring video;
the second camera detects the total number of pedestrians in the monitoring video;
the information interaction terminal, the gateway and the server realize storage and transmission of pedestrian information;
the user terminal is used for a user to acquire real-time pedestrian information or send an instruction to the server;
the wireless communication device is connected to the gateway; after the user terminal issues an instruction, the server and the gateway forward it in turn to the wireless communication device, so that alarming, or confirmation of a pedestrian's information and purpose through communication with the pedestrian, is achieved in real time.
In order to optimize the technical scheme, the specific measures adopted further comprise:
the information interaction terminal is communicated with the gateway through ZigBee or mMTC.
The gateway adopts WiFi to perform handshake connection with the server and adopts an EDP protocol to perform bidirectional communication.
A visual intelligent pedestrian monitoring method based on the Internet of things comprises the following steps:
step 1, a first camera identifies a person image in a monitoring video, and compares the identified person image with a pedestrian photo database to obtain a non-database pedestrian;
step 2, detecting the total number of pedestrians in the monitoring video by a second camera;
step 3, calculating the number of non-database pedestrians, namely unknown pedestrians;
and 4, transmitting the unknown pedestrian number and the unknown pedestrian image to the user terminal through the information interaction terminal, the gateway and the server in sequence.
The second camera in the step 2 detects the total number of pedestrians in the monitoring video, and specifically includes:
the second camera scans the number of pedestrians in the monitoring video by using a Faster R-CNN algorithm.
The second camera scans the number of pedestrians in the monitoring video by using the Faster R-CNN algorithm, that is, it performs pedestrian re-identification, which comprises the following steps:
step 21, inputting an image;
step 22, generating a candidate region through the region generation network RPN;
step 23, extracting features;
step 24, classifying by a classifier;
and 25, regressing and adjusting the bounding-box positions with the regressor.
The first camera in the step 1 identifies the person image in the monitoring video, and compares the identified person image with the pedestrian photo database to obtain the non-database pedestrian, which specifically comprises:
and training a pedestrian photo database by adopting a machine learning decision tree algorithm, and then identifying the faces of pedestrians in the monitoring video to obtain non-database pedestrians, namely unknown pedestrians.
The invention has the following beneficial effects:
1. Intelligent: no manual real-time monitoring is needed, and the system can run continuously 24 hours a day.
2. Real-time: pedestrians are re-identified directly from the video collected by the cameras, and the information is then sent to the Internet of Things gateway; the protocol can be the traditional ZigBee or, following the 5G trend, mMTC.
3. Low cost: apart from the price of the cameras, essentially no additional expenditure is required.
4. Improved neural network algorithm: Faster R-CNN is used for pedestrian re-identification.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
fig. 2 is a schematic diagram of a pedestrian re-identification process.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The invention discloses a visual intelligent pedestrian monitoring system based on the Internet of things, which comprises a first camera, a second camera, an information interaction end, a gateway, a server, a user terminal and a wireless communication device, wherein the first camera is connected with the second camera;
the first camera identifies non-database pedestrians, namely unknown pedestrians, in the monitoring video;
the second camera detects the total number of pedestrians in the monitoring video;
the information interaction terminal, the gateway and the server realize storage and transmission of pedestrian information;
the user terminal is used for a user to acquire real-time pedestrian information or send an instruction to the server;
the wireless communication device is connected to the gateway; after the user terminal issues an instruction, the server and the gateway forward it in turn to the wireless communication device, so that alarming, or confirmation of a pedestrian's information and purpose through communication with the pedestrian, is achieved in real time.
In the embodiment, the information interaction terminal communicates with the gateway through ZigBee or mMTC.
In the embodiment, the gateway adopts WiFi to perform handshake connection with the server and adopts an EDP protocol to perform bidirectional communication.
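The patent names the transport (a WiFi handshake with the server and bidirectional communication over the EDP protocol, not specified further) but gives no frame details. As an illustration only, the sketch below uses MQTT via paho-mqtt as a stand-in for the gateway-server link to show the same handshake-then-bidirectional-messaging pattern; the broker address, topic names and payloads are assumptions, not part of the patent.

```python
# Hypothetical stand-in for the gateway <-> server link. MQTT (paho-mqtt 1.x API)
# replaces EDP purely for illustration; broker address and topics are invented.
import paho.mqtt.client as mqtt

BROKER = "monitor-server.example"     # assumed server address
REPORT_TOPIC = "pedestrian/report"    # uplink: camera-side reports
COMMAND_TOPIC = "pedestrian/command"  # downlink: user/server instructions

def on_command(client, userdata, message):
    # A downlink instruction, e.g. trigger the alarm or open a voice channel
    # to the pedestrian via the wireless communication device.
    print("command received:", message.payload.decode())

client = mqtt.Client()                 # paho-mqtt 1.x constructor
client.on_message = on_command
client.connect(BROKER, 1883)           # handshake with the server
client.subscribe(COMMAND_TOPIC)
client.loop_start()                    # background thread for bidirectional traffic

# Uplink: forward a pedestrian report produced by the cameras.
client.publish(REPORT_TOPIC, '{"unknown_pedestrians": 2}')
```

In a real deployment the information interaction end would reach the gateway over ZigBee or mMTC, and the gateway would bridge that traffic onto this server link.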
Referring to fig. 1, a visual intelligent pedestrian monitoring method based on internet of things includes:
step 1, a first camera identifies a person image in a monitoring video, and compares the identified person image with a pedestrian photo database to obtain a non-database pedestrian;
step 2, detecting the total number of pedestrians in the monitoring video by a second camera;
step 3, calculating the number of non-database pedestrians, namely unknown pedestrians;
and 4, transmitting the unknown pedestrian count and the unknown pedestrian images to the user terminal through the information interaction terminal, the gateway and the server in sequence (a hypothetical sketch of steps 3 and 4 follows this list).
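A minimal sketch of steps 3 and 4, assuming the per-frame outputs of the two cameras are already available; the function and field names are hypothetical and chosen only to show that the unknown count is the total detected minus the database matches and that the result is serialized for transmission.

```python
# Hypothetical glue code for steps 3 and 4; all names are illustrative.
import json
import time

def build_pedestrian_report(total_count, matched_db_ids, unknown_image_refs):
    """Count unknown pedestrians and pack the report sent via the
    information interaction end -> gateway -> server -> user terminal."""
    unknown_count = max(total_count - len(matched_db_ids), 0)
    return json.dumps({
        "timestamp": time.time(),
        "total_pedestrians": total_count,
        "unknown_pedestrians": unknown_count,
        "unknown_images": unknown_image_refs,   # e.g. paths of cropped frames
    })
```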
In an embodiment, the first camera in step 1 identifies a person image in the surveillance video, and compares the identified person image with a pedestrian photo database to obtain a non-database pedestrian, specifically:
and training a pedestrian photo database by adopting a machine learning decision tree algorithm, and then identifying the faces of pedestrians in the monitoring video to obtain non-database pedestrians, namely unknown pedestrians. (pedestrian face recognition)
In an embodiment, the second camera in step 2 detects the total number of pedestrians in the monitoring video, specifically:
referring to fig. 2, the second camera scans the number of pedestrians in the surveillance video by using the fast R-CNN algorithm, which includes the following steps:
the four steps are all given to a deep neural network and are all operated on a GPU, so that the operation efficiency is greatly improved; the fast RCNN can be said to consist of two modules: a region generation network RPN candidate frame extraction module + Fast RCNN detection module; the RPN is a full convolutional neural network, and its inside is different from a general convolutional neural network in that a full link layer in the CNN is changed into a convolutional layer. Fast RCNN is based on RPN-extracted propofol detection and identification of targets in propofol; the specific process can be roughly summarized as follows: 1. an image is input. 2. Candidate regions are generated by the region generation network RPN. 3. And (5) extracting features. 4. And classifying by a classifier. 5. The regressor regresses and adjusts the position.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.
Claims (7)
1. A visual intelligent pedestrian monitoring system based on the Internet of things is characterized by comprising a first camera, a second camera, an information interaction end, a gateway, a server, a user terminal and a wireless communication device;
the first camera identifies non-database pedestrians, namely unknown pedestrians, in the monitoring video;
the second camera detects the total number of pedestrians in the monitoring video;
the information interaction terminal, the gateway and the server realize storage and transmission of pedestrian information;
the user terminal is used for a user to acquire real-time pedestrian information or send an instruction to the server;
the wireless communication device is connected to the gateway; after the user terminal issues an instruction, the server and the gateway forward it in turn to the wireless communication device, so that alarming, or confirmation of a pedestrian's information and purpose through communication with the pedestrian, is achieved in real time.
2. The visual intelligent pedestrian monitoring system based on the Internet of things of claim 1, wherein the information interaction terminal is in communication with a gateway through ZigBee or mMTC.
3. The visual intelligent pedestrian monitoring system based on the Internet of things of claim 1, wherein the gateway adopts WiFi to perform handshaking connection with the server and adopts EDP protocol to perform bidirectional communication.
4. The pedestrian monitoring method of the visual intelligent pedestrian monitoring system based on the Internet of things as claimed in any one of claims 1 to 3, wherein the method comprises the following steps:
step 1, a first camera identifies a person image in a monitoring video, and compares the identified person image with a pedestrian photo database to obtain a non-database pedestrian;
step 2, detecting the total number of pedestrians in the monitoring video by a second camera;
step 3, calculating the number of non-database pedestrians, namely unknown pedestrians;
and 4, transmitting the unknown pedestrian number and the unknown pedestrian image to the user terminal through the information interaction terminal, the gateway and the server in sequence.
5. The visual intelligent pedestrian monitoring method based on the internet of things as claimed in claim 4, wherein the step 2 of detecting the total number of pedestrians in the monitoring video by the second camera specifically comprises:
the second camera scans the number of pedestrians in the monitoring video by using a Faster R-CNN algorithm.
6. The visual intelligent pedestrian monitoring method based on the Internet of Things as claimed in claim 5, wherein the second camera scanning the number of pedestrians in the monitored video by using the Faster R-CNN algorithm, namely performing pedestrian re-identification, comprises the following steps:
step 21, inputting an image;
step 22, generating a candidate region through the region generation network RPN;
step 23, extracting features;
step 24, classifying by a classifier;
and 25, regressing and adjusting the bounding-box positions with the regressor.
7. The method as claimed in claim 4, wherein in step 1 the first camera identifying the person image in the surveillance video and comparing the identified person image with the pedestrian photo database to obtain the non-database pedestrian specifically comprises:
and training a pedestrian photo database by adopting a machine learning decision tree algorithm, and then identifying the faces of pedestrians in the monitoring video to obtain non-database pedestrians, namely unknown pedestrians.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011372747.1A CN113221612A (en) | 2020-11-30 | 2020-11-30 | Visual intelligent pedestrian monitoring system and method based on Internet of things |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011372747.1A CN113221612A (en) | 2020-11-30 | 2020-11-30 | Visual intelligent pedestrian monitoring system and method based on Internet of things |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113221612A true CN113221612A (en) | 2021-08-06 |
Family
ID=77085783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011372747.1A Pending CN113221612A (en) | 2020-11-30 | 2020-11-30 | Visual intelligent pedestrian monitoring system and method based on Internet of things |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113221612A (en) |
- 2020-11-30: Application CN202011372747.1A filed in China (publication CN113221612A), status Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007081519A2 (en) * | 2005-12-30 | 2007-07-19 | Steven Kays | Genius adaptive design |
CN107229894A (en) * | 2016-03-24 | 2017-10-03 | 上海宝信软件股份有限公司 | Intelligent video monitoring method and system based on computer vision analysis technology |
CN107798878A (en) * | 2017-11-29 | 2018-03-13 | 合肥寰景信息技术有限公司 | Traffic safety system for prompting based on face recognition |
CN109819208A (en) * | 2019-01-02 | 2019-05-28 | 江苏警官学院 | A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring |
CN109858388A (en) * | 2019-01-09 | 2019-06-07 | 武汉中联智诚科技有限公司 | A kind of intelligent tourism management system |
CN109902573A (en) * | 2019-01-24 | 2019-06-18 | 中国矿业大学 | Multiple-camera towards video monitoring under mine is without mark pedestrian's recognition methods again |
CN109740577A (en) * | 2019-02-28 | 2019-05-10 | 南京信息工程大学 | A kind of real-time face based on raspberry pie identifies camera system and its adjustment method again |
CN109934176A (en) * | 2019-03-15 | 2019-06-25 | 艾特城信息科技有限公司 | Pedestrian's identifying system, recognition methods and computer readable storage medium |
CN110009210A (en) * | 2019-03-26 | 2019-07-12 | 北京师范大学珠海分校 | A kind of student based on attention rate and focus listens to the teacher level comprehensive appraisal procedure |
CN111823252A (en) * | 2020-07-10 | 2020-10-27 | 上海迪勤智能科技有限公司 | Intelligent robot system |
Non-Patent Citations (4)
Title |
---|
任飞 et al., "Binocular vision system calibration based on an improved evolutionary neural network", 《电光与控制》 (Electronics Optics & Control), 2 June 2020 (2020-06-02) *
王海起; 李建; 刘香斌; 陈海波, "A video-based analysis system for campus classroom vacancy rate", 地理信息世界 (Geomatics World), no. 06, 25 December 2019 (2019-12-25) *
胡鹏, "Cross-line counting of dense crowds with a monocular camera and its embedded system implementation", 《中国硕士学位论文全文数据库》 (China Master's Theses Full-text Database), 15 August 2020 (2020-08-15) *
黄凯, "Research on multi-target pedestrian detection and tracking algorithms and their application in a video surveillance platform", 《中国硕士学位论文全文数据库》 (China Master's Theses Full-text Database), 15 March 2020 (2020-03-15) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109829443B (en) | Video behavior identification method based on image enhancement and 3D convolution neural network | |
CN111639544B (en) | Expression recognition method based on multi-branch cross-connection convolutional neural network | |
WO2021103868A1 (en) | Method for structuring pedestrian information, device, apparatus and storage medium | |
CN106845415B (en) | Pedestrian fine identification method and device based on deep learning | |
CN112836675B (en) | Unsupervised pedestrian re-identification method and system for generating pseudo tags based on clusters | |
CN114550053A (en) | Traffic accident responsibility determination method, device, computer equipment and storage medium | |
CN110334577B (en) | Face recognition method based on Haisi security chip | |
CN116092119A (en) | Human behavior recognition system based on multidimensional feature fusion and working method thereof | |
Yin | Object Detection Based on Deep Learning: A Brief Review | |
CN113761995A (en) | Cross-mode pedestrian re-identification method based on double-transformation alignment and blocking | |
CN113687610B (en) | Method for protecting terminal information of GAN-CNN power monitoring system | |
CN105956604B (en) | Action identification method based on two-layer space-time neighborhood characteristics | |
Fung-Lung et al. | An image acquisition method for face recognition and implementation of an automatic attendance system for events | |
Zhang | [Retracted] Sports Action Recognition Based on Particle Swarm Optimization Neural Networks | |
Daogang et al. | Anomaly identification of critical power plant facilities based on YOLOX-CBAM | |
CN113221612A (en) | Visual intelligent pedestrian monitoring system and method based on Internet of things | |
Yang et al. | Heterogeneous face detection based on multi‐task cascaded convolutional neural network | |
Peng et al. | [Retracted] Helmet Wearing Recognition of Construction Workers Using Convolutional Neural Network | |
CN116433645A (en) | Belt bulge detection method and detection system | |
Hochuli et al. | Deep Single Models vs. Ensembles: Insights for a Fast Deployment of Parking Monitoring Systems | |
CN117831131B (en) | Compression method of typical violation intelligent recognition algorithm based on convolutional neural network | |
Sheng et al. | A YOLOX-Based Detection Method of Triple-Cascade Feature Level Fusion for Power System External Defects | |
CN110738692A (en) | spark cluster-based intelligent video identification method | |
CN219340740U (en) | Belt bulge detecting system | |
CN116994104B (en) | Zero sample identification method and system based on tensor fusion and contrast learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |