CN114596657A - Gate passing system based on depth data - Google Patents

Gate passing system based on depth data

Info

Publication number
CN114596657A
Authority
CN
China
Prior art keywords
distance
height
data
depth
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210125630.6A
Other languages
Chinese (zh)
Other versions
CN114596657B (en)
Inventor
林春雨 (Lin Chunyu)
王会心 (Wang Huixin)
王昱婷 (Wang Yuting)
贺桢 (He Zhen)
聂浪 (Nie Lang)
赵耀 (Zhao Yao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN202210125630.6A
Publication of CN114596657A
Application granted
Publication of CN114596657B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1072 Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Abstract

The invention discloses a gate passing system based on depth data, comprising a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module. The pedestrian positioning module identifies the color photos acquired by the depth camera with the YOLOv3 object detection algorithm, frames the pedestrians, and stores the position information of the frames; the height detection module collects pedestrian height data; the distance detection module calculates the distance between pedestrians; the data storage module stores all collected and detected data directly in a database system for later evidence retrieval. With this system, height can be calculated to distinguish adults from children, the front-to-back spacing between pedestrians can be measured, and consecutive passers can be identified, enabling more intelligent gate passage detection.

Description

Gate passing system based on depth data
Technical Field
The invention relates to the technical field of urban rail transit, in particular to a gate passing system based on depth data.
Background
With the economic development of China and the gradual growth of its population, subways and high-speed railways have become the first choice for most people's travel. Subway passenger density is high and peaks during commuting hours, which places a heavy burden on subway management.
The gate system is an important facility for controlling the speed at which pedestrians pass. When passenger density is high, gates are prone to reduced recognition accuracy and failure to recognize consecutive pedestrians, which makes tailgating and fare evasion difficult to control. Moreover, because adults and the children accompanying them cannot be distinguished, children are exposed to safety risks when pedestrian volume is large.
To address these problems, computer vision and sensor-based technologies have been developed domestically for intelligent detection, improving ticket-checking efficiency and accuracy and effectively relieving traffic pressure. Some researchers have proposed schemes such as artificial-intelligence binocular gates and face-recognition gates to further upgrade gate systems.
Sensor-based detection has an error of 10%-15% and low precision, is applicable in few scenarios, and cannot detect large articles such as backpacks carried by pedestrians.
The artificial-intelligence binocular technique places a binocular sensor directly above the gate and judges multiple targets in the field of view through depth-image computation and recognition. Compared with a photoelectric sensor, it greatly improves detection accuracy, but because the information is collected vertically, the field of view is relatively small.
The above systems are to be further optimized in terms of detection accuracy and field of view.
Disclosure of Invention
The invention aims to provide a gate passing system based on depth data to solve the problems discussed in the background art.
The technical scheme of the invention is as follows:
a gate passage system based on depth data, comprising: the device comprises a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module; the system comprises a data acquisition module, a depth camera, a camera and a control module, wherein the data acquisition module is used for carrying out data depth acquisition through the depth camera, the depth camera is Microsoft Kinect, the depth camera is equipment for simultaneously acquiring RGB and depth, the depth camera is placed at a height of 2.3 meters from the ground, and a camera of the depth camera shoots downwards at a pitch angle of about 45 degrees with a horizontal line; the data alignment module is used for aligning the RGB color map and the depth data and can be realized by a checkerboard calibration method; the pedestrian positioning module identifies the color photos collected by the depth camera through a YOLOv3 target detection algorithm, frames out pedestrians and stores the position information of the frames, so that subsequent calculation and use are facilitated; the height detection module is used for collecting height data of pedestrians; the distance detection module is used for calculating the distance between pedestrians; the data storage module directly stores all the collected and detected data into the database system for storage and later evidence obtaining.
Preferably, the height detection module works as follows. Step one: import the depth information and the position information of the frame, and crop the depth information so that only the depth values inside the frame are retained; this concentrates processing on the region where the pedestrian is located and removes interference from the environment around the pedestrian. Step two: establish a mathematical model and calculate the pedestrian's Height from the minimum distance MinDepth from the depth camera to the top of the pedestrian's head, the distance MaxDepth from the depth camera, through the top of the head, to the ground point, and the vertical height KinectHeight of the depth camera above the ground, according to a formula that is reproduced only as an image in the original publication. Step three: repeat the above calculation for pedestrian pictures acquired in different scenes, take the average, and use it as the actual height PersonHeight of the specific pedestrian.
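Since that formula is not reproduced, a natural reading of the quantities in step two is the similar-triangles relation Height = KinectHeight * (MaxDepth - MinDepth) / MaxDepth. The sketch below encodes this assumed reconstruction; it should not be taken as the patent's exact formula.

```python
def pedestrian_height(min_depth: float, max_depth: float,
                      kinect_height: float = 2.3) -> float:
    """Assumed reconstruction of the height formula (the original is an image).

    The ray from the camera to the head top has length min_depth; extended to
    the ground the same ray has length max_depth, so by similar triangles the
    head top sits at kinect_height * (1 - min_depth / max_depth) above the floor.
    """
    return kinect_height * (max_depth - min_depth) / max_depth

# Illustrative numbers: camera 2.30 m up, 1.05 m to the head top, 3.30 m to the
# ground along the same ray -> 2.30 * (3.30 - 1.05) / 3.30 ≈ 1.57 m
```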
Preferably, the distance detection module measures the distance between pedestrians as follows. Step one: calculate the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical viewing angle θ of the depth camera, θ = arctan(Dmax/KinectHeight) − α, where Dmax is the actual distance from the bottom edge of the image to the depth camera, Dmin is the actual distance from the top edge of the image to the depth camera, and KinectHeight is the vertical height of the depth camera above the ground. Step two: from a proportional relation (reproduced only as an image in the original publication) between the image ordinate and the viewing angles, obtain the calculation formula of YLength: YLength = KinectHeight · tan(α + Δθ), where YLength is the distance on the ground, along the camera's viewing direction, from the depth camera to the point of the image with ordinate Y0, PhotoHeight is the height of the picture captured by the Kinect, Δθ is the angle between the head of the person in the captured picture and the bottom of the picture, and θ is the vertical field angle, i.e., the angle between the top and the bottom of the picture. Step three: substitute the height PersonHeight calculated in the height detection module and compute the ground horizontal distance TempDis between the pedestrian's feet and the depth camera, using a formula that is reproduced only as an image in the original publication. Step four: take the difference between the TempDis values of the two pedestrians to obtain the distance between them.
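Of the formulas in these steps, only YLength = KinectHeight · tan(α + Δθ) is reproduced in the text; the proportional relation of step two and the TempDis formula of step three exist only as images. The sketch below fills those gaps with plausible assumptions: the head row's fraction of the picture height is taken as its fraction of the vertical viewing angle, and TempDis is obtained by scaling YLength from the head ray down to the feet. Treat it as an illustration of the geometry, not the patented formulas; the values of d_min and d_max are placeholders.

```python
import math

def temp_dis(y_row: int, photo_height: int, person_height: float,
             kinect_height: float = 2.3, d_min: float = 0.8, d_max: float = 4.0) -> float:
    """Ground distance from the camera to a pedestrian's feet (TempDis).

    d_min / d_max are the ground distances seen at the two horizontal image
    edges (Dmin, Dmax in the text); their values, the row-to-angle mapping and
    the feet correction are illustrative assumptions, since the original
    formulas are only available as images.
    """
    alpha = math.atan(d_min / kinect_height)            # pitch-related angle (step one)
    theta = math.atan(d_max / kinect_height) - alpha    # vertical viewing angle (step one)
    # Assumed proportional relation: the head row's fraction of the picture
    # height (measured from the edge that sees d_min) gives its share of theta.
    delta_theta = (y_row / photo_height) * theta
    y_length = kinect_height * math.tan(alpha + delta_theta)   # YLength (step two)
    # The head top is person_height above the floor, so scale the ray's ground
    # intersection back to the feet (assumed reconstruction of step three).
    return y_length * (kinect_height - person_height) / kinect_height

def pedestrian_spacing(temp_dis_a: float, temp_dis_b: float) -> float:
    """Step four: the spacing is the difference between the two TempDis values."""
    return abs(temp_dis_a - temp_dis_b)
```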
Based on computer vision and deep-learning pedestrian recognition, the system uses an RGB-D camera and depth data to realize intelligent pedestrian detection: it calculates height to distinguish adults from children, measures the front-to-back spacing between pedestrians, and recognizes consecutive passers, achieving more intelligent gate passage detection. In particular, the system has the following advantages:
(1) High precision: the height measurement error is about 1%, so pedestrian height can be measured accurately enough to distinguish adults from children, and the spacing between pedestrians is controlled effectively to detect tailgating, an illegal way of passing the gate.
(2) Algorithmic innovation: the traditional monocular ranging algorithm can only measure distances in a plane, the camera must shoot parallel to the ground, and depth information in a three-dimensional scene cannot be recovered. By integrating monocular ranging with the height information obtained by the height detection module, the algorithm is improved and the reconstruction from a two-dimensional to a three-dimensional scene is achieved.
(3) Wide field of view, efficient and fast: compared with an ordinary gate system, the camera in this project shoots downward from a high position, giving a wide viewing angle, reducing occlusion between people and facilitating information collection. When crowds are large, information about many people can be acquired quickly, detection is efficient, and queueing time is reduced. In addition, a deep-learning algorithm frames the pedestrians in the captured pictures, and depth information is extracted only for the framed pedestrians, which improves overall picture-processing efficiency.
(4) Good expandability: compared with an ordinary gate, the system can detect pedestrian passing speed from visual information, and can be extended to contactless ticket checking by face recognition, detection of pedestrians' large luggage, and so on.
Drawings
Fig. 1 is a complete flow chart of a gate passage system based on depth data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of height detection in a gate passage system based on depth data according to an embodiment of the present invention;
FIG. 3 is a side view of the monocular ranging geometry in a gate passage system based on depth data according to an embodiment of the present invention;
FIG. 4 is a top view of the monocular ranging geometry in a gate passage system based on depth data according to an embodiment of the present invention;
FIG. 5 is a schematic plan view of the monocular ranging geometry in a gate passage system based on depth data according to an embodiment of the present invention;
fig. 6 is a two-person distance measurement model based on monocular distance measurement in a gate passage system based on depth data according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
As shown in Fig. 1, a gate passing system based on depth data comprises: a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module. The data acquisition module acquires data through a depth camera, a Microsoft Kinect, which captures RGB and depth simultaneously; the depth camera is mounted 2.3 meters above the ground, with its lens shooting downward at a pitch angle of about 45 degrees to the horizontal. The data alignment module aligns the RGB color image with the depth data, which can be realized by checkerboard calibration. The pedestrian positioning module identifies the color photos collected by the depth camera with the YOLOv3 object detection algorithm, frames the pedestrians, and stores the position information of the frames for subsequent calculation. The height detection module collects pedestrian height data. The distance detection module calculates the distance between pedestrians. The data storage module stores all collected and detected data directly in a database system for later evidence retrieval.
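The embodiment names YOLOv3 but does not fix a framework. One common way to obtain the pedestrian frames is OpenCV's DNN module with stock Darknet configuration and weight files; the sketch below is that variant, with placeholder file paths and thresholds, not the patent's own code.

```python
import cv2
import numpy as np

def detect_pedestrians(bgr_image: np.ndarray,
                       cfg_path: str = "yolov3.cfg",        # placeholder paths
                       weights_path: str = "yolov3.weights",
                       conf_thresh: float = 0.5,
                       nms_thresh: float = 0.4):
    """Return (x, y, w, h) boxes for the 'person' class (COCO class id 0).

    Illustrative only: one possible YOLOv3 front end using OpenCV's DNN module.
    """
    net = cv2.dnn.readNetFromDarknet(cfg_path, weights_path)
    h, w = bgr_image.shape[:2]
    blob = cv2.dnn.blobFromImage(bgr_image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores = [], []
    for out in outputs:
        for det in out:                       # det = [cx, cy, w, h, objectness, class scores...]
            class_scores = det[5:]
            if np.argmax(class_scores) != 0:  # keep only the 'person' class
                continue
            score = float(det[4] * class_scores[0])
            if score < conf_thresh:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)

    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [boxes[i] for i in np.array(keep).flatten()]
```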
As shown in Fig. 2, the height detection module works as follows. Step one: import the depth information and the position information of the frame, and crop the depth information so that only the depth values inside the frame are retained; this concentrates processing on the region where the pedestrian is located and removes interference from the environment around the pedestrian. Step two: establish a mathematical model and calculate the pedestrian's Height from the minimum distance MinDepth from the depth camera to the top of the pedestrian's head, the distance MaxDepth from the depth camera, through the top of the head, to the ground point, and the vertical height KinectHeight of the depth camera above the ground, according to a formula that is reproduced only as an image in the original publication. Step three: repeat the above calculation for pedestrian pictures acquired in different scenes, take the average, and use it as the actual height PersonHeight of the specific pedestrian.
Table 1 shows the measured data. For each scene, multiple frames were captured consecutively and averaged to suppress measurement error. The experimental results show that the height detection error is controlled at about 1%; the precision is high enough to serve as a basis for judging pedestrian height.
TABLE 1 height detection data (unit: cm)
(Table 1 data are reproduced only as images in the original publication.)
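A trivial sketch of the per-scene averaging described above (the function name and the example numbers are illustrative, not values from Table 1):

```python
def averaged_height(per_frame_heights):
    """Average the height estimates from consecutive frames of one scene."""
    valid = [h for h in per_frame_heights if h > 0]   # skip frames with no usable depth
    return sum(valid) / len(valid) if valid else float("nan")

# e.g. averaged_height([175.2, 174.1, 176.0, 174.8]) -> 175.025 (illustrative cm values)
```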
As shown in Fig. 3, the distance detection module measures the distance between pedestrians as follows. Step one: calculate the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical viewing angle θ of the depth camera, θ = arctan(Dmax/KinectHeight) − α, where Dmax is the actual distance from the bottom edge of the image to the depth camera, Dmin is the actual distance from the top edge of the image to the depth camera, and KinectHeight is the vertical height of the depth camera above the ground;
as shown in fig. 4 and 5, the distance detection module measures the distance between pedestrians, wherein in the second step, according to the proportional relation formula:
Figure BDA0003500228510000062
obtaining a calculation formula of YLength: ylength ═ Kinectheight tan (α + Δ θ), where Ylength is the vertical distance from the ordinate Y0 of a point of the image to the depth camera, Photoheight is the height of the picture captured by kinect, Dq is the angle between the head of the person in the captured picture and the bottom of the picture, and q is the vertical field angle, i.e., the angle between the top and the bottom of the picture;
as shown in fig. 6, the specific process of the distance detection module measuring and calculating the distance between pedestrians, wherein, in step three, the height personnight of the pedestrian calculated in the height detection module is taken into the step three, and the ground level distance TempDis between the foot of the pedestrian and the depth camera is calculated as:
Figure BDA0003500228510000063
and step four, subtracting the calculated TempDis of the two pedestrians to obtain the distance between the pedestrians.
After the monocular-ranging distance detection model was completed, data were collected in a variety of scenarios, including fixed spacing, walking against the flow, overtaking, and walking around obstacles; the measurement results are shown in Table 2.
TABLE 2 Distance measurement results combined with monocular ranging (unit: cm)
(Table 2 data are reproduced only as images in the original publication.)
The data in Table 2 show that the distance detection error based on monocular ranging is greatly reduced. Monocular ranging is therefore chosen as the method for measuring pedestrian spacing in the vision-based gate passing system.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. A gate passing system based on depth data, comprising: a data acquisition module, a data alignment module, a pedestrian positioning module, a height detection module, a distance detection module and a data storage module; wherein the data acquisition module acquires depth data through a depth camera, the depth camera is a Microsoft Kinect that captures RGB and depth simultaneously, the depth camera is mounted 2.3 m above the ground, and its lens shoots downward at a pitch angle of about 45 degrees to the horizontal; the data alignment module aligns the RGB color image with the depth data, which can be realized by checkerboard calibration; the pedestrian positioning module identifies the color photos collected by the depth camera with the YOLOv3 object detection algorithm, frames the pedestrians, and stores the position information of the frames for subsequent calculation; the height detection module collects pedestrian height data; the distance detection module calculates the distance between pedestrians; and the data storage module stores all collected and detected data directly in a database system for later evidence retrieval.
2. The gate passing system based on depth data of claim 1, wherein the height detection module works as follows: step one, importing the depth information and the position information of the frame and cropping the depth information so that only the depth values inside the frame are retained, thereby concentrating processing on the region where the pedestrian is located and removing interference from the environment around the pedestrian; step two, establishing a mathematical model and calculating the pedestrian's Height from the minimum distance MinDepth from the depth camera to the top of the pedestrian's head, the distance MaxDepth from the depth camera, through the top of the head, to the ground point, and the vertical height KinectHeight of the depth camera above the ground, according to a formula that is reproduced only as an image in the original publication; and step three, repeating the above calculation for pedestrian pictures acquired in different scenes, taking the average, and using it as the actual height PersonHeight of the specific pedestrian.
3. The gate passing system based on depth data of claim 1, wherein the distance detection module measures the distance between pedestrians as follows: step one, calculating the pitch angle α of the depth camera, α = arctan(Dmin/KinectHeight), and the vertical viewing angle θ of the depth camera, θ = arctan(Dmax/KinectHeight) − α, where Dmax is the actual distance from the bottom edge of the image to the depth camera, Dmin is the actual distance from the top edge of the image to the depth camera, and KinectHeight is the vertical height of the depth camera above the ground; step two, according to a proportional relation that is reproduced only as an image in the original publication, obtaining the calculation formula of YLength: YLength = KinectHeight · tan(α + Δθ), where YLength is the distance on the ground, along the camera's viewing direction, from the depth camera to the point of the image with ordinate Y0, PhotoHeight is the height of the picture captured by the Kinect, Δθ is the angle between the head of the person in the captured picture and the bottom of the picture, and θ is the vertical field angle, i.e., the angle between the top and the bottom of the picture; step three, substituting the height PersonHeight calculated in the height detection module and calculating the ground horizontal distance TempDis between the pedestrian's feet and the depth camera with a formula that is reproduced only as an image in the original publication; and step four, taking the difference between the TempDis values of the two pedestrians to obtain the distance between the pedestrians.
CN202210125630.6A 2022-02-10 2022-02-10 Gate passing system based on depth data Active CN114596657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210125630.6A CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210125630.6A CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Publications (2)

Publication Number Publication Date
CN114596657A true CN114596657A (en) 2022-06-07
CN114596657B CN114596657B (en) 2023-07-25

Family

ID=81806890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210125630.6A Active CN114596657B (en) 2022-02-10 2022-02-10 Gate passing system based on depth data

Country Status (1)

Country Link
CN (1) CN114596657B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002024986A (en) * 2000-07-06 2002-01-25 Nippon Signal Co Ltd:The Pedestrian detector
JP2010122078A (en) * 2008-11-20 2010-06-03 Nippon Signal Co Ltd:The Height detection system, and automatic ticket gate using the same
CN112131917A (en) * 2019-06-25 2020-12-25 北京京东尚科信息技术有限公司 Measurement method, apparatus, system, and computer-readable storage medium
CN110705432A (en) * 2019-09-26 2020-01-17 长安大学 Pedestrian detection device and method based on color and depth cameras
CN112507781A (en) * 2020-10-21 2021-03-16 天津中科智能识别产业技术研究院有限公司 Multi-dimensional multi-modal group biological feature recognition system and method
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel spacing detection method and device
CN112880642A (en) * 2021-03-01 2021-06-01 苏州挚途科技有限公司 Distance measuring system and distance measuring method
CN113749646A (en) * 2021-09-03 2021-12-07 中科视语(北京)科技有限公司 Monocular vision-based human body height measuring method and device and electronic equipment
CN113781578A (en) * 2021-09-09 2021-12-10 南京康尼电子科技有限公司 Gate passing behavior identification and control method combining target detection and binocular vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姜健芃 (Jiang Jianpeng): "Research on fast 3D human body measurement based on multi-directional point cloud stitching", China Master's Theses Full-text Database, Information Science and Technology Series *
张汝峰 (Zhang Rufeng); 胡钊政 (Hu Zhaozheng): "Real-time pedestrian flow counting based on RGB-D images and head-shoulder region coding", Journal of Transport Information and Safety, no. 06 *

Also Published As

Publication number Publication date
CN114596657B (en) 2023-07-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant