CN111144231B - Self-service channel anti-trailing detection method and system based on depth image
- Publication number
- CN111144231B (Application No. CN201911249118.7A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- pedestrian
- sub-region
- Prior art date: 2019-12-09
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention provides a depth-image-based anti-trailing detection method for self-service channels, comprising the following steps: S1, deploying a distance sensor to collect depth data in the detection area and acquire a complete depth image of the self-service channel; S2, dividing the detection area into a plurality of sub-areas according to the channel position and analyzing the spatial state information of the pedestrians in each sub-area; and S3, determining the number of pedestrians in the detection area from the spatial state information of the sub-areas and judging whether trailing occurs in the channel. The invention provides a method for analyzing, recording, and judging pedestrian spatial state information in a self-service channel, and can be applied to anti-trailing detection of self-service channels in fields such as border inspection and rail transit.
Description
Technical Field
The invention relates to pedestrian detection and intelligent point cloud analysis technologies, and in particular to a depth-image-based anti-trailing detection method and system for self-service channels.
Background
Trailing, as used herein, refers to the act of closely following a legitimately authorized person in order to pass through a gate channel. Trailing behavior poses serious security problems for departments such as border inspection and high-speed rail. Traditional anti-trailing detection relies on infrared technology and suffers from high false-alarm rates, frequent missed detections, and the lack of visual evidence retention.
Disclosure of Invention
In order to solve the problems in the prior art and improve the accuracy of anti-trailing detection, the invention provides a self-service channel anti-trailing detection method based on a depth image, comprising the following steps:
S1, deploying a distance sensor to collect depth data in the detection area;
S2, dividing the detection area into a plurality of sub-areas according to the channel position, and analyzing the spatial state information of the pedestrians in each sub-area;
S3, determining the number of pedestrians in the detection area according to the spatial state information of the sub-areas, and judging whether pedestrian trailing occurs in the channel.
The invention further provides a self-service channel anti-trailing detection system based on a depth image, comprising a processor storing a computer program which, when executed, implements the following steps:
S1, deploying a distance sensor to collect depth data in the detection area and acquiring a complete depth image of the self-service channel;
S2, dividing the detection area into a plurality of sub-areas according to the channel position, and analyzing the spatial state information of the pedestrians in each sub-area;
S3, determining the number of pedestrians in the detection area according to the spatial state information of the sub-areas, and judging whether pedestrian trailing occurs in the channel.
The method and system can automatically identify pedestrians and objects in the self-service channel and judge whether trailing occurs. The invention can be applied to trailing detection in self-service channels for public security and border defense, rail transit, and similar settings, reducing the cost of manual supervision and improving the level of intelligent management. A distance sensor deployed above the self-service channel collects depth images of pedestrians and articles in the channel area; the depth images are converted into spatially structured data in real time; an intelligent algorithm automatically detects information such as the positions and numbers of pedestrians and articles in the channel, judges trailing conditions, and issues warning information for trailing behavior, ensuring efficient and orderly operation of the self-service channel. The invention addresses the following problems. Anomaly detection: automatically identifying two or more persons passing through the channel area at the same time, preventing missed trailing reports. Object recognition: luggage such as trolley cases and backpacks is intelligently distinguished from pedestrians to avoid false alarms caused by misidentification.
Drawings
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Detailed Description
Embodiments of the present invention are described below with reference to the drawings.
As shown in fig. 1, the flow of one embodiment of the present invention is:
and S1, setting a distance sensor to collect depth data in the detection area and acquiring a complete depth image of the self-service channel. The Kinect or other types of surface scanning distance sensors can be used and erected above the self-service channel, and the distance sensors scan the complete channel area in a non-visible light mode, so that the complete depth image of the self-service channel is obtained.
S2, dividing the detection area into a plurality of sub-areas according to the channel position, and analyzing the spatial state information of the pedestrians in each sub-area, where the spatial state information includes the height, position, and number of pedestrians. Specifically, S2 includes the following steps:
S21, coordinate conversion: to convert the original coordinate space into a coordinate space with the ground as the reference plane, planar data points in the original point cloud space, such as ground data points, are selected, and a transformation is estimated that maps these ground points onto a common plane so that their distance to the plane is minimized, thereby obtaining the conversion parameters.
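A minimal sketch of how the conversion parameters of S21 could be estimated, assuming NumPy and that a set of ground data points has already been selected: the plane is fitted by SVD (least squares) and a rotation is built that aligns its normal with the vertical axis. The function names are illustrative, not part of the original disclosure.

```python
import numpy as np

def fit_ground_transform(ground_points):
    """Fit a plane to selected ground points and return (R, t) such that
    transformed ground points lie near z = 0 (z becomes height above ground)."""
    centroid = ground_points.mean(axis=0)
    _, _, vt = np.linalg.svd(ground_points - centroid)
    normal = vt[-1]                      # plane normal = least-variance direction
    if normal[2] < 0:                    # make the normal point "up"
        normal = -normal
    z_axis = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z_axis)
    c = float(np.dot(normal, z_axis))
    skew = np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])
    # Rodrigues-style rotation that aligns the plane normal with the z axis
    R = np.eye(3) + skew + skew @ skew / (1.0 + c)
    t = -R @ centroid
    return R, t

def apply_transform(points, R, t):
    return points @ R.T + t
```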
S22, dividing sub-regions: the depth image after the coordinate conversion of S21 is converted into point cloud data, and the point cloud data is registered against the ground to obtain the relative positional relationship between the points; the distance between each point and the divided sub-regions is calculated to judge whether the point lies within a sub-region; points outside the sub-regions are removed and points inside are retained, thereby distinguishing the point clouds inside and outside the sub-regions and achieving the division into sub-regions.
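The sub-region division of S22 can be illustrated as a simple crop of the ground-registered point cloud against rectangular sub-region bounds laid out along the channel; the bounds and the three-region layout in the comment are hypothetical values, not dimensions given in the patent.

```python
import numpy as np

def points_in_subregion(points, x_range, y_range):
    """Keep only points whose ground-plane coordinates fall inside a
    rectangular sub-region; x_range and y_range are (min, max) bounds."""
    x, y = points[:, 0], points[:, 1]
    mask = (x >= x_range[0]) & (x <= x_range[1]) & \
           (y >= y_range[0]) & (y <= y_range[1])
    return points[mask]

# Hypothetical layout: a 2 m wide channel split into three 1 m deep sub-regions
# subregions = [points_in_subregion(cloud, (0.0, 2.0), (i, i + 1.0)) for i in range(3)]
```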
S23, obtaining space state information: and performing cluster analysis on the point cloud in the reserved sub-region, if a pedestrian exists in the sub-region, obtaining a point cloud cluster result for the pedestrian, wherein the cluster result comprises a cluster center positioned at the head position of the pedestrian, the height of the point cloud corresponding to the cluster center relative to the ground is the height of the pedestrian, the position of the point cloud relative to the sub-region is the position of the pedestrian, and the number of the pedestrians can be obtained according to the number of the cluster centers positioned at the head position of the pedestrian.
S24, obtaining candidate target information from the point cloud data in each sub-region by means of a clustering method, the information including the height, position, and number of targets. The clustering method may be mean-shift clustering (see Comaniciu, D., & Meer, P. (1999). Mean shift analysis and applications. Proc. IEEE Int. Conf. on Computer Vision, 2, 1197); the point cloud data are clustered to obtain the candidate pedestrian target information.
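A sketch of S23/S24 using scikit-learn's MeanShift: each cluster's highest point is taken as the head, giving the candidate's height and ground position. The bandwidth, the minimum head height used to discard low clusters, and the returned dictionary fields are illustrative choices, not values stated in the patent.

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_candidates(region_points, bandwidth=0.3, min_height=1.0):
    """Mean-shift clustering of one sub-region's point cloud.
    Returns one candidate per cluster with its height above ground and x-y position."""
    if len(region_points) == 0:
        return []
    ms = MeanShift(bandwidth=bandwidth).fit(region_points)
    candidates = []
    for label in np.unique(ms.labels_):
        cluster = region_points[ms.labels_ == label]
        top = cluster[np.argmax(cluster[:, 2])]   # highest point, roughly the head
        if top[2] >= min_height:                  # discard low clusters (e.g. luggage on the floor)
            candidates.append({"height": float(top[2]),
                               "position": top[:2].tolist()})
    return candidates
```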
S25, recognizing the candidate targets obtained in S24 using a deep learning technique so as to distinguish pedestrians from articles. The target recognition proceeds as follows: a convolutional neural network model is established, a deep learning model for target recognition is trained on pedestrian samples, and the trained model is used to recognize the candidate targets and distinguish pedestrians from articles.
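A minimal PyTorch sketch of the kind of convolutional classifier described in S25, labelling a depth patch cropped around a candidate as pedestrian or article. The 64x64 patch size and the layer configuration are assumptions; the patent only states that a convolutional neural network is trained on pedestrian samples.

```python
import torch
import torch.nn as nn

class CandidateClassifier(nn.Module):
    """Small CNN labelling a 64x64 depth patch as article (0) or pedestrian (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # 64x64 input -> 16x16 feature map

    def forward(self, x):                # x: (batch, 1, 64, 64) depth patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Inference after the usual supervised training:
# logits = CandidateClassifier()(patch_batch); labels = logits.argmax(dim=1)
```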
S3, integrating the analysis results of S2 and judging the number of pedestrians in each channel area to determine whether pedestrian trailing occurs. Specifically, the number of pedestrians in each channel area is obtained; if it is greater than or equal to two, pedestrian trailing is determined to have occurred; if it is less than two, no trailing is determined.
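The decision rule of S3 reduces to a per-channel count of the recognized pedestrians (articles having been filtered out in S25); a sketch, with the alarm of S4 shown as a hypothetical hook:

```python
def detect_tailing(channel_pedestrians):
    """channel_pedestrians: list of recognized pedestrians in one channel area.
    Two or more pedestrians in the same channel area count as a trailing event."""
    count = len(channel_pedestrians)
    return count >= 2, count

# trailing, n = detect_tailing(pedestrians_in_channel)
# if trailing: raise_alarm(n)   # hypothetical alarm hook corresponding to step S4
```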
S4, if pedestrian trailing is determined to have occurred, alarm information is sent automatically.
According to another aspect of the invention, a self-service channel anti-trailing detection system based on a depth image is proposed, comprising a distance sensor and a computer program by which the above-described method steps are implemented when executed.
The invention has been field-tested in an actual border inspection channel; it can detect pedestrian trailing during clearance in real time and intelligently identify pedestrian and article information. Good detection accuracy was achieved in a variety of complex clearance situations.
The above description is only a preferred embodiment of the present invention; it should be noted that modifications and substitutions readily made by those skilled in the art within the technical scope of the present invention also fall within the protection scope of the present invention.
Claims (10)
1. A self-service channel anti-trailing detection method based on a depth image, characterized by comprising the following steps:
S1, deploying a distance sensor to collect depth data in the detection area and acquiring a complete depth image of the self-service channel;
S2, dividing the detection area into a plurality of sub-areas according to the channel position, performing cluster analysis on the point cloud in the sub-areas, and analyzing the spatial state information of the pedestrians in each sub-area, wherein the spatial state information comprises the height, position, and number of pedestrians; performing cluster analysis on the point cloud in each sub-region and obtaining candidate target information by a clustering method; and recognizing the candidate targets by a deep learning technique;
S3, determining the number of pedestrians in the detection area according to the spatial state information of the sub-areas, and judging whether pedestrian trailing occurs in the channel.
2. The method according to claim 1, wherein step S2 includes:
S21, coordinate conversion: converting the original coordinate space of the depth image into a coordinate space taking the ground as a plane;
S22, dividing sub-regions: converting the depth image after the coordinate conversion of S21 into point cloud data, performing ground registration on the point cloud data to obtain the relative positional relationship between the points, calculating the distance between each point and the divided sub-regions, judging whether each point lies within a divided sub-region, removing the points outside the sub-regions, and retaining the points inside the sub-regions;
S23, obtaining spatial state information: performing cluster analysis on the point cloud retained in each sub-region;
S24, obtaining candidate target information from the point cloud data in each sub-region by a clustering method;
S25, recognizing the candidate targets obtained in S24 by a deep learning technique so as to distinguish pedestrians from articles.
3. The method of claim 2,
in S23, if a pedestrian is present in the sub-region, a point cloud clustering result for the pedestrian is obtained, in which a cluster center is located at the pedestrian's head; the height of the point cloud corresponding to that cluster center relative to the ground gives the pedestrian's height, its position relative to the sub-region gives the pedestrian's position, and the number of pedestrians is obtained from the number of cluster centers located at head positions.
4. The method of claim 3, wherein, in S25,
establishing a convolutional neural network model, training a deep learning model for target recognition on pedestrian samples, and recognizing the candidate targets with the trained deep learning model to distinguish pedestrians from articles.
5. The method of claim 3, wherein, in S3,
acquiring the number of pedestrians in each channel area; if the number of pedestrians in a channel area is greater than or equal to two, determining that pedestrian trailing occurs; if the number of pedestrians in the channel is less than two, determining that no trailing occurs.
6. A self-service channel anti-trailing detection system based on a depth image, characterized by comprising a processor storing a computer program which, when executed, implements the following steps:
S1, deploying a distance sensor to collect depth data in the detection area and acquiring a complete depth image of the self-service channel;
S2, dividing the detection area into a plurality of sub-areas according to the channel position, performing cluster analysis on the point cloud in the sub-areas, and analyzing the spatial state information of the pedestrians in each sub-area, wherein the spatial state information comprises the height, position, and number of pedestrians; performing cluster analysis on the point cloud in each sub-region and obtaining candidate target information by a clustering method; and recognizing the candidate targets by a deep learning technique;
S3, determining the number of pedestrians in the detection area according to the spatial state information of the sub-areas, and judging whether pedestrian trailing occurs in the channel.
7. The system of claim 6,
S21, coordinate conversion: converting the original coordinate space of the depth image into a coordinate space taking the ground as a plane;
S22, dividing sub-regions: converting the depth image after the coordinate conversion of S21 into point cloud data, performing ground registration on the point cloud data to obtain the relative positional relationship between the points, calculating the distance between each point and the divided sub-regions, judging whether each point lies within a divided sub-region, removing the points outside the sub-regions, and retaining the points inside the sub-regions;
S23, obtaining spatial state information: performing cluster analysis on the point cloud retained in each sub-region;
S24, obtaining candidate target information from the point cloud data in each sub-region by a clustering method;
S25, recognizing the candidate targets obtained in S24 by a deep learning technique so as to distinguish pedestrians from articles.
8. The system of claim 7, wherein, in S23,
if a pedestrian is present in the sub-region, a point cloud clustering result for the pedestrian is obtained, in which a cluster center is located at the pedestrian's head; the height of the point cloud corresponding to that cluster center relative to the ground gives the pedestrian's height, its position relative to the sub-region gives the pedestrian's position, and the number of pedestrians is obtained from the number of cluster centers located at head positions.
9. The system of claim 7, wherein, in S25,
establishing a convolutional neural network model, training a deep learning model for target recognition on pedestrian samples, and recognizing the candidate targets with the trained deep learning model to distinguish pedestrians from articles.
10. The system of claim 7, wherein, in S3,
acquiring the number of pedestrians in each channel area; if the number of pedestrians in a channel area is greater than or equal to two, determining that pedestrian trailing occurs; if the number of pedestrians in the channel is less than two, determining that no trailing occurs.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911249118.7A CN111144231B (en) | 2019-12-09 | 2019-12-09 | Self-service channel anti-trailing detection method and system based on depth image |
PCT/CN2020/113951 WO2021114765A1 (en) | 2019-12-09 | 2020-09-08 | Depth image-based method and system for anti-trailing detection of self-service channel |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911249118.7A CN111144231B (en) | 2019-12-09 | 2019-12-09 | Self-service channel anti-trailing detection method and system based on depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111144231A CN111144231A (en) | 2020-05-12 |
CN111144231B true CN111144231B (en) | 2022-04-15 |
Family
ID=70518511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911249118.7A Active CN111144231B (en) | 2019-12-09 | 2019-12-09 | Self-service channel anti-trailing detection method and system based on depth image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111144231B (en) |
WO (1) | WO2021114765A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144231B (en) * | 2019-12-09 | 2022-04-15 | 深圳市鸿逸达科技有限公司 | Self-service channel anti-trailing detection method and system based on depth image |
CN112070052A (en) * | 2020-09-16 | 2020-12-11 | 青岛维感科技有限公司 | Interval monitoring method, device and system and storage medium |
CN112836634B (en) * | 2021-02-02 | 2024-03-08 | 厦门瑞为信息技术有限公司 | Multi-sensor information fusion gate anti-trailing method, device, equipment and medium |
CN117392585B (en) * | 2023-10-24 | 2024-06-18 | 广州广电运通智能科技有限公司 | Gate traffic detection method and device, electronic equipment and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521646A (en) * | 2011-11-11 | 2012-06-27 | 浙江捷尚视觉科技有限公司 | Complex scene people counting algorithm based on depth information cluster |
CN104268851A (en) * | 2014-09-05 | 2015-01-07 | 浙江捷尚视觉科技股份有限公司 | ATM self-service business hall behavior analysis method based on depth information |
CN105654021A (en) * | 2014-11-12 | 2016-06-08 | 株式会社理光 | Method and equipment for detecting target position attention of crowd |
CN105787469A (en) * | 2016-03-25 | 2016-07-20 | 广州市浩云安防科技股份有限公司 | Method and system for pedestrian monitoring and behavior recognition |
CN106530310A (en) * | 2016-10-25 | 2017-03-22 | 深圳大学 | Pedestrian counting method and device based on human head top recognition |
CN107221175A (en) * | 2017-05-31 | 2017-09-29 | 深圳市鸿逸达科技有限公司 | A kind of pedestrian is intended to detection method and system |
CN107423679A (en) * | 2017-05-31 | 2017-12-01 | 深圳市鸿逸达科技有限公司 | A kind of pedestrian is intended to detection method and system |
CN108241177A (en) * | 2016-12-26 | 2018-07-03 | 航天信息股份有限公司 | The anti-trailing detecting system and detection method of single transit passage |
CN108280952A (en) * | 2018-01-25 | 2018-07-13 | 盛视科技股份有限公司 | Passenger trailing monitoring method based on foreground object segmentation |
CN108876968A (en) * | 2017-05-10 | 2018-11-23 | 北京旷视科技有限公司 | Recognition of face gate and its anti-trailing method |
CN109271847A (en) * | 2018-08-01 | 2019-01-25 | 阿里巴巴集团控股有限公司 | Method for detecting abnormality, device and equipment in unmanned clearing scene |
CN109858329A (en) * | 2018-12-15 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Anti- trailing method, apparatus, equipment and storage medium based on recognition of face |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103400148B (en) * | 2013-08-02 | 2015-04-01 | 上海泓申科技发展有限公司 | Video analysis-based bank self-service area tailgating behavior detection method |
CN103971380B (en) * | 2014-05-05 | 2016-09-28 | 中国民航大学 | Pedestrian based on RGB-D trails detection method |
US20190130215A1 (en) * | 2016-04-21 | 2019-05-02 | Osram Gmbh | Training method and detection method for object recognition |
US10643337B2 (en) * | 2017-12-22 | 2020-05-05 | Symbol Technologies, Llc | Systems and methods for segmenting and tracking package walls in commercial trailer loading |
CN110378179B (en) * | 2018-05-02 | 2023-07-18 | 上海大学 | Subway ticket evasion behavior detection method and system based on infrared thermal imaging |
CN111144231B (en) * | 2019-12-09 | 2022-04-15 | 深圳市鸿逸达科技有限公司 | Self-service channel anti-trailing detection method and system based on depth image |
- 2019-12-09: CN application CN201911249118.7A filed; granted as CN111144231B (status: active)
- 2020-09-08: PCT application PCT/CN2020/113951 filed (published as WO2021114765A1)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521646A (en) * | 2011-11-11 | 2012-06-27 | 浙江捷尚视觉科技有限公司 | Complex scene people counting algorithm based on depth information cluster |
CN104268851A (en) * | 2014-09-05 | 2015-01-07 | 浙江捷尚视觉科技股份有限公司 | ATM self-service business hall behavior analysis method based on depth information |
CN105654021A (en) * | 2014-11-12 | 2016-06-08 | 株式会社理光 | Method and equipment for detecting target position attention of crowd |
CN105787469A (en) * | 2016-03-25 | 2016-07-20 | 广州市浩云安防科技股份有限公司 | Method and system for pedestrian monitoring and behavior recognition |
CN106530310A (en) * | 2016-10-25 | 2017-03-22 | 深圳大学 | Pedestrian counting method and device based on human head top recognition |
CN108241177A (en) * | 2016-12-26 | 2018-07-03 | 航天信息股份有限公司 | The anti-trailing detecting system and detection method of single transit passage |
CN108876968A (en) * | 2017-05-10 | 2018-11-23 | 北京旷视科技有限公司 | Recognition of face gate and its anti-trailing method |
CN107221175A (en) * | 2017-05-31 | 2017-09-29 | 深圳市鸿逸达科技有限公司 | A kind of pedestrian is intended to detection method and system |
CN107423679A (en) * | 2017-05-31 | 2017-12-01 | 深圳市鸿逸达科技有限公司 | A kind of pedestrian is intended to detection method and system |
CN108280952A (en) * | 2018-01-25 | 2018-07-13 | 盛视科技股份有限公司 | Passenger trailing monitoring method based on foreground object segmentation |
CN109271847A (en) * | 2018-08-01 | 2019-01-25 | 阿里巴巴集团控股有限公司 | Method for detecting abnormality, device and equipment in unmanned clearing scene |
CN109858329A (en) * | 2018-12-15 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Anti- trailing method, apparatus, equipment and storage medium based on recognition of face |
Non-Patent Citations (3)
Title |
---|
Yongwei Xu et al., "Using data assimilation method to predict people flow in areas of incomplete data availability," 2016 IEEE Global Humanitarian Technology Conference, 2017-02-16, pp. 845-846 *
Du Mingwei et al., "Design and Implementation of a Personal Anti-trailing System," Science & Technology Vision (科技视界), 2018-12-31, pp. 22-24 *
Wu Guangming et al., "Building Detection in Aerial Imagery Based on a U-shaped Convolutional Neural Network," Acta Geodaetica et Cartographica Sinica (测绘学报), 2018-06-30, Vol. 47, No. 6, pp. 864-872 *
Also Published As
Publication number | Publication date |
---|---|
CN111144231A (en) | 2020-05-12 |
WO2021114765A1 (en) | 2021-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111144231B (en) | Self-service channel anti-trailing detection method and system based on depth image | |
Soilán et al. | Segmentation and classification of road markings using MLS data | |
US8655078B2 (en) | Situation determining apparatus, situation determining method, situation determining program, abnormality determining apparatus, abnormality determining method, abnormality determining program, and congestion estimating apparatus | |
Fefilatyev et al. | Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system | |
US8509486B2 (en) | Vehicle license plate recognition method and system thereof | |
KR101731243B1 (en) | A video surveillance apparatus for identification and tracking multiple moving objects with similar colors and method thereof | |
CN100589561C (en) | Dubious static object detecting method based on video content analysis | |
CN107662872A (en) | The monitoring system and its monitoring method of passenger conveyor | |
EP0567059A1 (en) | Object recognition system and abnormality detection system using image processing | |
CN104935879A (en) | Multi-View Human Detection Using Semi-Exhaustive Search | |
JP2004058737A (en) | Safety monitoring device in station platform | |
US11830274B2 (en) | Detection and identification systems for humans or objects | |
CN106156695B (en) | Outlet and/or entrance area recognition methods and device | |
Stahlschmidt et al. | Applications for a people detection and tracking algorithm using a time-of-flight camera | |
KR102228395B1 (en) | Apparatus, system and method for analyzing images using divided images | |
CN104573697A (en) | Construction hoist lift car people counting method based on multi-information fusion | |
EP2546807B1 (en) | Traffic monitoring device | |
CN113658427A (en) | Road condition monitoring method, system and equipment based on vision and radar | |
Yang et al. | On-road collision warning based on multiple FOE segmentation using a dashboard camera | |
Wu et al. | Smartphone zombie detection from lidar point cloud for mobile robot safety | |
US8594438B2 (en) | Method for the identification of objects | |
Ling et al. | A multi-pedestrian detection and counting system using fusion of stereo camera and laser scanner | |
KR101560810B1 (en) | Space controled method and apparatus for using template image | |
Börcs et al. | Dynamic 3D environment perception and reconstruction using a mobile rotating multi-beam Lidar scanner | |
Khuc | Computer vision based structural identification framework for bridge health monitoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||