CN114596657B - Gate passing system based on depth data - Google Patents

Gate passing system based on depth data

Info

Publication number
CN114596657B
CN114596657B (application CN202210125630.6A)
Authority
CN
China
Prior art keywords
distance
height
depth
pedestrian
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210125630.6A
Other languages
Chinese (zh)
Other versions
CN114596657A (en)
Inventor
林春雨 (Lin Chunyu)
王会心 (Wang Huixin)
王昱婷 (Wang Yuting)
贺桢 (He Zhen)
聂浪 (Nie Lang)
赵耀 (Zhao Yao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN202210125630.6A priority Critical patent/CN114596657B/en
Publication of CN114596657A publication Critical patent/CN114596657A/en
Application granted granted Critical
Publication of CN114596657B publication Critical patent/CN114596657B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1072 Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 Measuring or testing not otherwise provided for
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face


Abstract

The invention discloses a gate passing system based on depth data, comprising a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module. The data acquisition module captures depth data; the data alignment module aligns the RGB color image with the depth data; the pedestrian positioning module runs the YOLOv3 object detection algorithm on the color pictures captured by the depth camera, frames the pedestrians, and stores the position information of the frames; the height detection module collects pedestrian height data; the distance detection module calculates the distance between pedestrians; and the data storage module stores all acquired and detected data directly in a database system for retention and later evidence collection. The system can compute height to distinguish adults from children, measure the distance to the pedestrians in front and behind, and identify consecutive passers, enabling more intelligent gate passage detection.

Description

Gate passing system based on depth data
Technical Field
The invention relates to the technical field of urban rail transit, and in particular to a gate passing system based on depth data.
Background
With the growth of China's economy and population, subways and high-speed rail have become the primary travel choice for most people. Subway passenger density is especially high, peaking during rush hours and placing a heavy burden on subway management.
The gate system is an important facility for controlling the speed of pedestrian flow. When passenger density is high, gates suffer from reduced recognition accuracy and cannot recognize consecutive passers, making tailgating and fare evasion difficult to control. Moreover, because adults and accompanying children in the crowd cannot be distinguished, children face safety hazards under heavy traffic.
To address these problems, domestic researchers have applied computer vision, sensors and related technologies to intelligent detection, improving ticket-checking efficiency and accuracy and effectively relieving traffic pressure. Some researchers have studied schemes such as artificial-intelligence binocular gates and face recognition gates to further upgrade gate systems.
Sensor-based detection has an error of 10%-15%: its precision is low, its applicable scenarios are limited, and it cannot detect large articles such as backpacks carried by pedestrians.
The artificial-intelligence binocular approach places a binocular sensor directly above the gate and distinguishes multiple targets in the field of view through depth-image computation and recognition. Compared with a photoelectric sensor, this greatly improves detection precision, but because information is collected vertically, the field of view is small.
These systems still require further optimization in detection accuracy and field of view.
Disclosure of Invention
The present invention provides a gate passing system based on depth data to solve the problems of the prior art discussed in the background.
The technical scheme of the invention is as follows:
a gate traffic system based on depth data, comprising: the system comprises a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module; the data acquisition module performs data depth acquisition through a depth camera, wherein the depth camera is Microsoft Kinect, the depth camera is equipment for simultaneously acquiring RGB and depth, the depth camera is placed at a height of 2.3 meters from the ground, and a camera of the depth camera shoots downwards at a pitch angle of about 45 degrees with a horizontal line; the data alignment module is used for aligning the RGB color map and the depth data and can be realized by a checkerboard calibration method; the pedestrian positioning module recognizes the color photo acquired by the depth camera through a YOLOv3 target detection algorithm, frames pedestrians and stores the position information of the frames, so that the subsequent calculation and use are facilitated; the height detection module is used for collecting height data of pedestrians; the distance detection module is used for calculating the distance between pedestrians; and the data storage module directly stores all acquired and detected data into the database system for retention and later evidence collection.
Preferably, the height detection module works as follows: step one, import the depth data and the frame position information, and crop the depth data so that only the pixels inside the frame remain, so that processing concentrates on the region where the pedestrian is located and interference from the environment outside the pedestrian is excluded; step two, establish a mathematical model and compute the pedestrian's height: with MinDepth the minimum distance from the depth camera to the top of the pedestrian's head, MaxDepth the distance from the camera along the same ray extended to the ground point, and KinectHeight the vertical height of the camera above the ground, the height follows by similar triangles as Height = KinectHeight × (1 - MinDepth / MaxDepth); step three, for pedestrian pictures acquired in different scenes, repeat this computation several times, take the average, and obtain the actual height PersonHeight of the specific pedestrian.
Preferably, the distance detection module measures the distance between pedestrians as follows: step one, compute the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical field of view θ of the depth camera, θ = arctan(Dmax/KinectHeight) - α, where Dmax is the actual distance of the bottom edge of the image from the depth camera, Dmin is the actual distance of the top edge of the image from the depth camera, and KinectHeight is the vertical height of the camera above the ground; step two, from the proportional relation Δθ/θ = Y0/PhotoHeight, obtain the formula Ylength = KinectHeight × tan(α + Δθ), where Ylength is the horizontal ground distance from the image point with ordinate Y0 to the depth camera, PhotoHeight is the height of the picture acquired by the Kinect, Δθ is the angle between the person's head and the bottom of the picture in the acquired picture, and θ is the vertical field-of-view angle, i.e. the angle between the top and the bottom of the picture; step three, bring in the pedestrian height PersonHeight computed by the height detection module and calculate the horizontal ground distance TempDis between the pedestrian's feet and the depth camera as TempDis = Ylength × (KinectHeight - PersonHeight) / KinectHeight; and step four, take the difference of the calculated TempDis values of the two pedestrians to obtain the distance between the pedestrians.
This system is based on computer-vision and deep-learning pedestrian recognition. Using an RGB-D depth camera, it exploits depth data for intelligent pedestrian detection: it computes height to distinguish adults from children, measures the distance to the pedestrians in front and behind, and recognizes consecutive passers, achieving more intelligent gate passage detection. In particular, the system has the following advantages:
(1) High precision: the height measurement error is about 1%, so pedestrian height can be measured accurately enough to distinguish adults from children, and pedestrian spacing can be monitored effectively to detect tailgating through the gate.
(2) Algorithmic innovation: the traditional monocular ranging algorithm measures only planar distance, requires the camera to shoot parallel to the ground, and cannot recover depth in a three-dimensional scene. Fusing the height information produced by the height detection module into the monocular ranging algorithm improves it and achieves a two-dimensional to three-dimensional scene reconstruction.
(3) Wide field of view, efficient and fast: compared with a common gate system, the camera shoots from above, so the viewing angle is wide, occlusion between people is reduced, and information acquisition is easier; with large crowds it can quickly capture information on multiple people, making detection efficient and reducing queuing time. In addition, a deep-learning algorithm frames the captured picture, and depth information is then extracted only for the framed pedestrians, which improves the overall efficiency of picture processing.
(4) Good extensibility: compared with a common gate, the system can detect pedestrian passing speed from visual information and, combined with face recognition, support contactless ticket checking, detection of large luggage and the like; its extensibility is strong.
Drawings
FIG. 1 is a complete flow chart of a gate passing system based on depth data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of height detection in a gate passing system based on depth data according to an embodiment of the present invention;
FIG. 3 is a side view of a monocular ranging geometry in a gate traffic system based on depth data according to an embodiment of the present invention;
FIG. 4 is a top view of a monocular ranging geometry in a gate traffic system based on depth data according to an embodiment of the present invention;
FIG. 5 is a schematic plan view of a monocular ranging geometry in a gate traffic system based on depth data according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a two-person distance measurement model based on monocular ranging in a gate passing system based on depth data according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, with examples illustrated in the accompanying drawings, where the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, intended to explain the present invention, and are not to be construed as limiting it.
To facilitate understanding, reference is made to the drawings of several specific embodiments; these should in no way be taken to limit the embodiments of the invention.
As shown in fig. 1, a gate passing system based on depth data comprises: a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module. The data acquisition module captures depth data through a depth camera, here a Microsoft Kinect, a device that acquires RGB and depth simultaneously; the camera is placed 2.3 meters above the ground and shoots downward at a pitch angle of about 45 degrees to the horizontal. The data alignment module aligns the RGB color image with the depth data, which can be realized by checkerboard calibration. The pedestrian positioning module recognizes the color pictures acquired by the depth camera through the YOLOv3 object detection algorithm, frames the pedestrians and stores the position information of the frames for subsequent computation. The height detection module collects pedestrian height data; the distance detection module calculates the distance between pedestrians; and the data storage module stores all acquired and detected data directly in a database system for retention and later evidence collection.
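The hand-off between the pedestrian positioning module and the measurement modules can be illustrated with a short sketch. The function name, the (x1, y1, x2, y2) box format, and the convention that invalid Kinect pixels read 0 are assumptions for illustration, not part of the patent: after YOLOv3 frames each pedestrian, the height and distance modules operate only on the depth pixels inside each frame.

```python
import numpy as np

def crop_depth_to_boxes(depth_map: np.ndarray, boxes):
    """Keep only the depth pixels inside each detected pedestrian box.

    depth_map : aligned depth image of shape (H, W), in metres; 0 marks invalid pixels.
    boxes     : iterable of (x1, y1, x2, y2) pixel boxes from the detector.
    Returns one depth ROI per pedestrian for the height/distance modules.
    """
    rois = []
    for (x1, y1, x2, y2) in boxes:
        # Slice out the box region; copy so later filtering cannot alter the frame.
        roi = depth_map[y1:y2, x1:x2].copy()
        rois.append(roi)
    return rois
```

Restricting all later depth processing to these ROIs is what the description credits for the improved picture-processing efficiency.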
As shown in FIG. 2, the height detection module works as follows: step one, import the depth data and the frame position information, and crop the depth data so that only the pixels inside the frame remain, so that processing concentrates on the region where the pedestrian is located and interference from the environment outside the pedestrian is excluded; step two, establish a mathematical model and compute the pedestrian's height: with MinDepth the minimum distance from the depth camera to the top of the pedestrian's head, MaxDepth the distance from the camera along the same ray extended to the ground point, and KinectHeight the vertical height of the camera above the ground, the height follows by similar triangles as Height = KinectHeight × (1 - MinDepth / MaxDepth); step three, for pedestrian pictures acquired in different scenes, repeat this computation several times, take the average, and obtain the actual height PersonHeight of the specific pedestrian.
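Under the similar-triangles model above (camera at KinectHeight, nearest ray MinDepth to the head top, the same ray extended to the ground giving MaxDepth), the height computation can be sketched as follows. Taking MinDepth and MaxDepth as the smallest and largest valid depths inside the cropped box is an illustrative simplification, not the patent's exact procedure:

```python
import numpy as np

def estimate_height(depth_roi_m: np.ndarray, kinect_height_m: float) -> float:
    """Estimate pedestrian height from a cropped depth ROI (metres).

    Similar triangles along the head ray give
        (KinectHeight - Height) / MinDepth = KinectHeight / MaxDepth,
    hence Height = KinectHeight * (1 - MinDepth / MaxDepth).
    """
    valid = depth_roi_m[depth_roi_m > 0]   # Kinect reports 0 for invalid pixels
    min_depth = float(valid.min())          # closest return: top of the head
    max_depth = float(valid.max())          # farthest return: toward the ground point
    return kinect_height_m * (1.0 - min_depth / max_depth)
```

In practice the patent averages this estimate over multiple frames and scenes to obtain PersonHeight.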
Table 1 shows the measurement data; for each scene, multiple frames were captured continuously and averaged to suppress errors. The experimental results show that the height detection error is controlled at about 1%; the precision is high and can serve as the basis for judging pedestrian height.
TABLE 1 Height detection data (unit: cm)
As shown in fig. 3, the distance detection module measures the distance between pedestrians as follows: step one, compute the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical field of view θ of the depth camera, θ = arctan(Dmax/KinectHeight) - α, where Dmax is the actual distance of the bottom edge of the image from the depth camera, Dmin is the actual distance of the top edge of the image from the depth camera, and KinectHeight is the vertical height of the camera above the ground;
as shown in fig. 4 and fig. 5, the specific process of measuring the distance between pedestrians by the distance detection module includes a second step of:obtaining a calculation formula of the Ylength: the method comprises the steps of (1) obtaining a photo image, namely, ylength=Kinecthheight (alpha+delta theta), wherein Ylength is the vertical distance from an ordinate Y0 of one point of the image to a depth camera, photoheight is the height of a picture acquired by kinect, dq is the included angle between the head of a person and the bottom of the picture in the acquired picture, and q is the vertical field view angle, namely, the included angle between the top and the bottom of the picture;
as shown in fig. 6, the specific process of measuring and calculating the distance between pedestrians by the distance detection module, wherein, step three, the height of the pedestrians is taken into the height detection module to calculate the ground horizontal distance TempDis between the feet of the pedestrians and the depth camera as follows:and step four, the calculated TempDis of the two pedestrians is subjected to difference, so that the distance between the pedestrians is obtained.
After the monocular-ranging distance detection model was completed, data were collected in various scenarios, including fixed spacing, walking against the flow, overtaking, and detouring around obstacles; the measurement results are shown in Table 2.
TABLE 2 Results of the distance detection experiments combined with monocular ranging (unit: cm)
The data in Table 2 show that the distance detection error based on monocular ranging is greatly reduced. Therefore, the vision-based gate passing system adopts monocular ranging as the method for measuring pedestrian spacing.
The present invention is not limited to the above embodiments; any change or substitution readily conceivable by those skilled in the art within the technical scope disclosed herein is intended to fall within the scope of the present invention. The protection scope of the present invention is therefore defined by the claims.

Claims (1)

1. A gate passing system based on depth data, comprising: a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module; the data acquisition module captures depth data through a depth camera, the depth camera being a Microsoft Kinect, a device that acquires RGB and depth simultaneously, placed 2.3 meters above the ground with its camera shooting downward at a pitch angle of about 45 degrees to the horizontal; the data alignment module aligns the RGB color image with the depth data, which can be realized by checkerboard calibration; the pedestrian positioning module recognizes the color pictures acquired by the depth camera through the YOLOv3 object detection algorithm, frames the pedestrians and stores the position information of the frames for subsequent computation; the height detection module collects pedestrian height data; the distance detection module calculates the distance between pedestrians; the data storage module stores all acquired and detected data directly in a database system for retention and later evidence collection;
the height detection module works as follows: step one, import the depth data and the frame position information, and crop the depth data so that only the pixels inside the frame remain, so that processing concentrates on the region where the pedestrian is located and interference from the environment outside the pedestrian is excluded; step two, establish a mathematical model and compute the pedestrian's height: with MinDepth the minimum distance from the depth camera to the top of the pedestrian's head, MaxDepth the distance from the camera along the same ray extended to the ground point, and KinectHeight the vertical height of the camera above the ground, the height is computed according to the formula Height = KinectHeight × (1 - MinDepth / MaxDepth); step three, for pedestrian pictures acquired in different scenes, repeat this computation several times, take the average, and obtain the actual height PersonHeight of the specific pedestrian; the distance detection module measures the distance between pedestrians as follows: step one, compute the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical field of view θ of the depth camera, θ = arctan(Dmax/KinectHeight) - α, where Dmax is the actual distance of the bottom edge of the image from the depth camera, Dmin is the actual distance of the top edge of the image from the depth camera, and KinectHeight is the vertical height of the camera above the ground;
step two, from the proportional relation Δθ/θ = Y0/PhotoHeight, obtain the formula Ylength = KinectHeight × tan(α + Δθ), where Ylength is the horizontal ground distance from the image point with ordinate Y0 to the depth camera, PhotoHeight is the height of the picture acquired by the Kinect, Δθ is the angle between the person's head and the bottom of the picture in the acquired picture, and θ is the vertical field-of-view angle, i.e. the angle between the top and the bottom of the picture; step three, bring in the pedestrian height PersonHeight computed by the height detection module and calculate the horizontal ground distance TempDis between the pedestrian's feet and the depth camera as TempDis = Ylength × (KinectHeight - PersonHeight) / KinectHeight; and step four, take the difference of the calculated TempDis values of the two pedestrians to obtain the distance between the pedestrians.
CN202210125630.6A 2022-02-10 2022-02-10 Gate passing system based on depth data Active CN114596657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210125630.6A CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210125630.6A CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Publications (2)

Publication Number Publication Date
CN114596657A CN114596657A (en) 2022-06-07
CN114596657B true CN114596657B (en) 2023-07-25

Family

ID=81806890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210125630.6A Active CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Country Status (1)

Country Link
CN (1) CN114596657B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117765651A (en) * 2023-12-27 2024-03-26 暗物质(北京)智能科技有限公司 Gate passing identification method and system based on top view visual angle depth fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002024986A (en) * 2000-07-06 2002-01-25 Nippon Signal Co Ltd:The Pedestrian detector
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel spacing detection method and device
CN112507781A (en) * 2020-10-21 2021-03-16 天津中科智能识别产业技术研究院有限公司 Multi-dimensional multi-modal group biological feature recognition system and method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5165540B2 (en) * 2008-11-20 2013-03-21 日本信号株式会社 Height detection system and automatic ticket gate using the same
CN112131917A (en) * 2019-06-25 2020-12-25 北京京东尚科信息技术有限公司 Measurement method, apparatus, system, and computer-readable storage medium
CN110705432B (en) * 2019-09-26 2022-10-25 长安大学 Pedestrian detection device and method based on color and depth cameras
CN112880642B (en) * 2021-03-01 2023-06-13 苏州挚途科技有限公司 Ranging system and ranging method
CN113749646A (en) * 2021-09-03 2021-12-07 中科视语(北京)科技有限公司 Monocular vision-based human body height measuring method and device and electronic equipment
CN113781578B (en) * 2021-09-09 2024-05-28 南京康尼电子科技有限公司 Gate passing behavior identification and control method combining target detection and binocular vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-time pedestrian flow statistics based on RGB-D images and head-shoulder region coding; Zhang Rufeng; Hu Zhaozheng; Journal of Transport Information and Safety (No. 06); full text *
Research on fast three-dimensional human body measurement based on multi-directional point cloud registration; Jiang Jianpeng; China Master's Theses Full-text Database, Information Science and Technology; full text *

Also Published As

Publication number Publication date
CN114596657A (en) 2022-06-07


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant