CN113781578B - Gate passing behavior identification and control method combining target detection and binocular vision - Google Patents
- Publication number: CN113781578B (application CN202111058477.1A)
- Authority: CN (China)
- Prior art keywords: pedestrians, gate, distance, passing, behavior
- Prior art date: 2021-09-09
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/85 — Stereo camera calibration (under G06T7/80, analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Classification techniques
- G06T5/80 — Geometric correction (image enhancement or restoration)
- G06T7/0002 — Inspection of images, e.g. flaw detection
Abstract
The invention discloses a gate passing behavior recognition and control method combining target detection and binocular vision. Binocular vision equipment is installed on top of a gate, and its cameras are calibrated to obtain the intrinsic and extrinsic parameter matrices required for binocular distance measurement. A disparity map is obtained through binocular stereo matching, and a trained target detection model detects targets such as pedestrians, luggage, and wheelchairs within the field of view. Attributes such as pedestrian height and luggage size are judged from the disparity map, and the passing logic of pedestrians is judged by combining the coordinate positions and attribute information of pedestrians, luggage, and other targets over consecutive frames. The gate door or scissor door is opened and closed according to the passing behavior, and audible and visual alarms are generated for abnormal passing behavior. The method can effectively raise the intelligence level of the gate and improve its passing efficiency and safety.
Description
Technical Field
The invention relates to a gate passing behavior recognition and control method combining target detection and binocular vision, and belongs to the technical field of traffic intelligent recognition.
Background
With the development of urban rail transit, more and more city dwellers choose to travel by subway, and the gate, as an indispensable access-control channel, plays an important role in rail transit.

In existing gate control technology, passenger passing behavior is judged mainly by 16 pairs of infrared through-beam (correlation) sensors: the passing state is inferred from which sensors are blocked. This judgment method cannot effectively identify some passing logic; for example, it cannot distinguish a child carried by an adult from tailgating through the gate, cannot recognize crawling under or jumping over the gate, and cannot distinguish luggage from passengers.

To identify passenger passing behavior more accurately, it is highly desirable for those skilled in the art to improve existing gate recognition and control methods.
Disclosure of Invention
Purpose: To solve the problem that existing gates identify passenger passing behavior insufficiently, the invention provides a gate passing behavior recognition and control method combining target detection and binocular vision.
The technical scheme is as follows: in order to solve the technical problems, the invention adopts the following technical scheme:
a gate passing behavior recognition and control method combining target detection and binocular vision comprises the following steps:
When the object detection module identifies a pedestrian and also identifies luggage, the height H of the luggage and the distance D1 between the person and the luggage are calculated; if H > α and D1 < β1, the passenger is judged to be carrying a large piece of luggage, and the passing behavior is normal passing. Here α is the threshold distinguishing large from small luggage, and β1 is the threshold for the pedestrian-to-luggage distance.

When the target detection module identifies two pedestrians, their heights and the distance D2 between them are calculated; if one is judged to be an adult, the other a child, and D2 < β2, the passenger is judged to be carrying a child, and the passing behavior is normal passing. Here β2 is a threshold for the pedestrian-to-pedestrian distance.

When the object detection module identifies one pedestrian and no luggage, the target is judged to be a single passenger, and the passing behavior is normal passing.

When the target detection module identifies two pedestrians, their heights are calculated; if both are judged to be adults, the distance D3 between them is calculated, and if D3 < β3, the behavior is judged to be tailgating, and the passing behavior is abnormal passing. Here β3 is a threshold for the pedestrian-to-pedestrian distance.

When the target detection module identifies a pedestrian while the gate door has not opened, and the pedestrian's height is lower than the gate door, the behavior is judged to be crawling under the gate, and the passing behavior is abnormal passing.

When the target detection module identifies a pedestrian while the gate door has not opened, and the pedestrian's height is higher than the gate door, the behavior is judged to be jumping over the gate, and the passing behavior is abnormal passing.

After a card-swiping action occurs and the target detection module identifies two pedestrians, their heights and the distance between them are calculated; if both are judged to be adults, the distance between them changes from small to large, and the two pedestrians swap order, the rear pedestrian is judged to have suddenly overtaken the one in front to enter the gate illegally, and the passing behavior is abnormal passing.
Preferably, the method further comprises the following steps:
The result of the passing-behavior judgment is sent to the gate, which generates the corresponding control. For normal passing behavior, based on the real-time distances between pedestrians, objects, and the gate, the gate door is kept from closing while a pedestrian or object is inside the gate. For abnormal passing behavior, the corresponding audible and visual alarm is played.
Preferably, the target detection module obtains the following steps:
Collect targets in the disparity maps and label their categories; the targets and their category labels form training samples. Train a target detection model with the training samples, and use the trained model to detect the categories of targets in the region captured by the binocular vision equipment.
Preferably, the disparity map obtaining step includes:
Using the binocular vision equipment at the top of the gate and a black-and-white checkerboard calibration board at the bottom of the gate, acquire calibration pictures of the targets.

Calibrate the calibration pictures acquired by the left and right cameras with a calibration tool, and solve for the intrinsic parameter matrices of the two cameras.

Perform stereo calibration and alignment on the intrinsic-parameter results to obtain the extrinsic parameter matrix of the binocular vision equipment.

Rectify the images acquired by the binocular vision equipment using the intrinsic matrices of the left and right cameras and the extrinsic matrix of the equipment, so that the epipolar lines of the two camera images are parallel, and align corresponding epipolar lines to the same horizontal line.

Generate a disparity map for each frame acquired by the binocular vision equipment using a stereo matching algorithm.
Preferably, the target height and the distance between the targets are obtained as follows:
The target height is obtained by calculating the difference between the distances of the top and bottom of the label box; the distance between two targets is obtained from the coordinate information of their label boxes by calculating the distance between the boxes' center points.

The label boxes are acquired as follows:

Scan the disparity map to determine the regions whose gray values clearly change, and mark the regions corresponding to targets at different depths inside the gate with label boxes.
The distance between the tag frame and the camera is calculated as follows:
depth=(f*baseline)/disp
where depth is the distance, f is the focal length from the intrinsic parameter matrix, baseline is the distance between the optical centers of the two cameras (the baseline distance, determined by the physical mounting positions of the cameras), and disp is the gray value of each coordinate point in the disparity map.
Beneficial effects: The gate passing behavior recognition and control method combining target detection and binocular vision introduces target detection to accurately recognize targets such as passengers, luggage, and wheelchairs, uses binocular vision to accurately calculate pedestrian height and luggage size, and judges passenger passing logic from the combined results, thereby effectively raising the gate's intelligence level and improving passing efficiency and safety.

Target detection and binocular vision equipment are added to the existing gate: the target detection algorithm accurately identifies a target's category, and binocular vision judges its height and size. Combining the two effectively determines the passing logic of the gate and hence whether the passing behavior is normal. The gate can then be controlled to avoid pinching targets, and abnormal passing behavior triggers an alarm to alert passengers and staff, improving the safety of the gate and the passenger experience and assisting the safe operation and management of rail transit.
Drawings
Fig. 1 is a schematic view of the apparatus structure of the present invention.
FIG. 2 is a flow chart of the object detection and recognition method of the present invention.
Detailed Description
The invention will be further described with reference to specific examples.
A gate passing behavior recognition and control method combining target detection and binocular vision comprises the following steps:
Step 1: Using the binocular vision equipment at the top of the gate and black-and-white checkerboard calibration boards at the bottom of the gate, acquire calibration pictures of targets in the gate.

Step 2: Calibrate the calibration pictures acquired by the left and right cameras with a calibration tool, and solve for the intrinsic parameter matrices of the two cameras.

Step 3: Perform stereo calibration and alignment on the intrinsic-parameter results to obtain the extrinsic parameter matrix of the binocular vision equipment.

Step 4: Rectify the images acquired by the binocular vision equipment using the intrinsic matrices of the left and right cameras and the extrinsic matrix of the equipment, so that the epipolar lines of the two camera images are parallel, and align corresponding epipolar lines to the same horizontal line.

Step 5: Generate a disparity map for each frame acquired by the binocular vision equipment using a stereo matching algorithm; scan the disparity map to determine the regions whose gray values clearly change, and mark the regions corresponding to targets at different depths inside the gate with label boxes.
Step 6: Calculate the distance between a target and the camera from the gray value of the target inside its label box, using the formula:
depth=(f*baseline)/disp
where depth is the distance, f is the focal length from the intrinsic parameter matrix, baseline is the distance between the optical centers of the two cameras (the baseline distance, determined by the physical mounting positions of the cameras), and disp is the gray value of each coordinate point in the disparity map.
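A minimal sketch of this disparity-to-depth conversion. The sample numbers are illustrative only: fx from the left camera in Example 1's Table 2, the magnitude of the first translation component from Table 4 taken as the baseline (assuming millimetre units), and an arbitrary 30-pixel disparity.

```python
def disparity_to_depth(f: float, baseline: float, disp: float) -> float:
    """depth = (f * baseline) / disp; depth comes out in the unit of baseline."""
    if disp <= 0:
        raise ValueError("disparity must be positive")
    return (f * baseline) / disp

# fx ≈ 965.3 px (Table 2, left camera), baseline ≈ 66.18 (Table 4, assumed mm)
print(round(disparity_to_depth(965.30601, 66.18310, 30.0), 1))  # ≈ 2129.6
```

Note the inverse relation: the larger the disparity (gray value), the nearer the target.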
Step 7: and calculating the distance between the center points of the two target tag frames through the coordinate information of the tag frames to obtain the distance between the two targets.
Step 8: Collect targets in the disparity maps and label their categories (the categories include pedestrians, luggage, wheelchairs, and the like); the targets and their category labels form training samples. Train a target detection model with the training samples, and use the trained model to detect the categories of targets in the region captured by the binocular vision equipment.
Step 9: when the object detection module identifies a pedestrian and simultaneously identifies baggage, the height H of the baggage is calculated, the distance D1 between the person and the baggage is calculated, if H > alpha, D1< beta 1, the passenger is judged to carry a large piece of baggage, and the passing behavior is normal passing. Wherein, alpha is the judgment threshold value of the size luggage, beta 1 is the threshold value of the distance between the pedestrian and the luggage, and the alpha and beta 1 values are adjustable.
When the target detection module identifies two pedestrians, the height of the two pedestrians is calculated, the distance D2 between the two pedestrians is calculated, if a person is judged to be an adult, a person is a child, the distance D2 is less than beta 2, the passengers are judged to carry the child, and the passing behavior is normal passing. Wherein β 2 is a threshold value of pedestrian-to-pedestrian distance.
When the object detection module identifies a pedestrian and does not identify luggage, the object detection module judges that the object detection module is a single passenger and the passing behavior is normal passing.
When the target detection module identifies two pedestrians, the heights of the two pedestrians are calculated, if the two pedestrians are judged to be adults, the distance D3 between the two pedestrians is calculated, and if the distance D3< beta 3, the following behavior is judged, and the passing behavior is abnormal passing. Beta 3 is the threshold for pedestrian-to-pedestrian distance.
When the target detection module identifies a pedestrian, but the gate door is not opened, and the pedestrian height is lower than the gate door height, the pedestrian is judged to be a tripping behavior, and the traffic behavior is abnormal traffic.
When the target detection module identifies a pedestrian, but the gate door is not opened, and the height of the pedestrian is higher than that of the gate door, the pedestrian is judged to be a jump-up behavior, and the traffic behavior is abnormal traffic.
After the card swiping action occurs, the target detection module identifies two pedestrians, the heights of the two pedestrians and the distance between the two pedestrians are calculated, if the two pedestrians are judged to be adults, the distance between the two pedestrians is changed from small to large, the two pedestrians are staggered, the pedestrians at the rear are judged to suddenly cross the previous pedestrians to enter the gate for illegal passing, and the passing action is abnormal passing.
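The per-frame rules of step 9 can be condensed into one function, sketched below. The observation structure, default threshold values, and gate-door height are illustrative assumptions (the patent states the thresholds are adjustable), and multi-frame tracking such as the overtaking rule is omitted.

```python
# Hedged sketch of the single-frame traffic-behavior rules; all constants are
# illustrative assumptions, not normative values from the patent.
ALPHA = 20.0          # large/small luggage threshold (cm)
BETA1 = 10.0          # pedestrian-to-luggage distance threshold (cm)
BETA2 = 20.0          # adult-child distance threshold (cm)
BETA3 = 20.0          # adult-adult tailgating threshold (cm)
ADULT_HEIGHT = 150.0  # adult/child height threshold (cm)

def classify(pedestrian_heights, luggage_height=None, dist=None,
             gate_open=True, gate_height=100.0):
    """Classify one frame's detections as (behavior, is_normal)."""
    n = len(pedestrian_heights)
    if n == 1 and not gate_open:
        # gate door closed but a pedestrian is inside the passage
        if pedestrian_heights[0] < gate_height:
            return ("crawl under", False)
        return ("jump over", False)
    if n == 1:
        if luggage_height is not None and luggage_height > ALPHA and dist < BETA1:
            return ("passenger with large luggage", True)
        return ("single passenger", True)
    if n == 2:
        adults = [h > ADULT_HEIGHT for h in pedestrian_heights]
        if all(adults) and dist < BETA3:
            return ("tailgating", False)
        if any(adults) and not all(adults) and dist < BETA2:
            return ("adult carrying child", True)
    return ("unclassified", True)

print(classify([180.0, 175.0], dist=15.0))  # ('tailgating', False)
print(classify([175.0, 110.0], dist=12.0))  # ('adult carrying child', True)
print(classify([80.0], gate_open=False))    # ('crawl under', False)
```

In the full method these single-frame labels would be accumulated over consecutive frames (step 7 of Example 1) before the gate is commanded.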
Step 10: and sending the result of the judgment of the passing behavior to the gate, wherein the gate generates corresponding control, for normal passing behavior, according to the real-time intervals among pedestrians, objects and the gate, when the pedestrians and the objects are in the gate, the gate door of the gate cannot be closed, the pedestrians or the baggage is prevented from being clamped, and for abnormal passing behavior, corresponding audible and visual alarm information is played.
Example 1:
A gate passing behavior recognition and control method combining target detection and binocular vision comprises the following specific steps:
Step 1: Add a metal bracket beside the gate and install the binocular vision equipment on the bracket at the top of the gate, as shown in Fig. 1. In this embodiment, a HiSilicon Hi3559C chip built into the binocular vision equipment serves as the computing chip for binocular vision and target detection.
Step 2: In the specific implementation, a black-and-white checkerboard calibration board is used to collect calibration pictures, and a calibration tool calibrates the data collected by the left and right cameras separately to obtain their intrinsic parameter matrices. The MATLAB calibration toolbox (calib_gui) extracts the corner points of the calibration-board pictures acquired by the left and right cameras, and Zhang's calibration method calibrates each camera independently to obtain the intrinsic parameters shown in Tables 2 and 3.
TABLE 2

| Camera intrinsics | fx | fy | cx | cy |
|---|---|---|---|---|
| Left camera | 965.30601 | 964.05115 | 643.31171 | 335.44134 |
| Right camera | 968.75835 | 967.98767 | 642.84449 | 366.34843 |
Here fx and fy are the focal lengths and cx, cy the principal point; together they form the camera's intrinsic matrix K. The intrinsic parameters characterize how a point in camera coordinates passes through the camera's lens and becomes a pixel through pinhole imaging and electronic conversion.
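As a worked illustration of how K is used, the sketch below assembles the left camera's intrinsic matrix from Table 2 and projects a camera-coordinate point to pixels under an ideal (distortion-free) pinhole model.

```python
# Illustrative sketch: intrinsic matrix K from Table 2 (left camera) and an
# ideal pinhole projection; lens distortion (Table 3) is ignored here.
import numpy as np

fx, fy, cx, cy = 965.30601, 964.05115, 643.31171, 335.44134
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[:2] / p[2]

# A point on the optical axis lands exactly at the principal point (cx, cy).
print(project([0.0, 0.0, 2.0]))
```

In the real pipeline the distortion coefficients of Table 3 would be applied before this ideal projection.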
TABLE 3

| Distortion parameters | Kc_01 | Kc_02 | Kc_03 | Kc_04 | Kc_05 |
|---|---|---|---|---|---|
| Left camera | -0.36967 | 0.13202 | 0.00102 | -0.00044 | 0.00000 |
| Right camera | -0.39385 | 0.28220 | 0.00034 | -0.00339 | 0.00000 |
Here Kc_01 through Kc_05 are the distortion coefficients. The distortion parameters characterize the fact that an actual pixel does not fall exactly at its theoretically computed position but undergoes a certain offset and deformation.
Step 3: Perform stereo calibration and alignment on the intrinsic matrices obtained by calibrating the left and right cameras separately, yielding the camera's extrinsic matrix. Stereo calibration of the acquired left and right intrinsics with stereo_gui gives the rotation matrix R and translation vector T, together called the camera extrinsic parameters. The results are shown in Table 4.
TABLE 4

| Extrinsic parameter | Component 1 | Component 2 | Component 3 |
|---|---|---|---|
| Rec rotation vector | 0.03593 | -0.01109 | -0.00828 |
| T translation vector | -66.18310 | 2.14988 | 0.71439 |
The Rec rotation vector obtained here must be converted to a rotation matrix by the Rodrigues transform in OpenCV. The rotation matrix and translation vector form the camera extrinsics, which describe how a point on an object in real-world coordinates falls into camera coordinates after rotation and translation.
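For readers without OpenCV at hand, the Rodrigues transform that `cv2.Rodrigues` performs can be sketched in pure NumPy; applying it to Table 4's Rec vector yields the rotation matrix R. This reimplementation is an illustration, not the patent's code.

```python
# Pure-NumPy sketch of the Rodrigues transform: rotation vector -> 3x3 matrix.
import numpy as np

def rodrigues(rvec):
    """R = I + sin(t)*K + (1 - cos(t))*K^2, with t = |rvec| and K the
    skew-symmetric matrix of the unit rotation axis."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)  # zero rotation
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = rodrigues([0.03593, -0.01109, -0.00828])  # Rec vector from Table 4
print(np.allclose(R @ R.T, np.eye(3)))  # a rotation matrix is orthonormal: True
```

The small values of the Rec vector confirm the two cameras are nearly parallel, as expected for a stereo rig.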
Step 4: Rectify the images acquired by the binocular vision equipment using the intrinsic matrices of the left and right cameras, so that the epipolar lines of the two camera images are parallel, and align corresponding epipolar lines to the same horizontal line. Generate a disparity map for each frame acquired by the cameras with a stereo matching algorithm; in this embodiment, binocular stereo matching is performed by the DPU module built into the Hi3559C chip to obtain the disparity map. Scan the disparity map to determine the regions whose gray values clearly change, and mark the regions corresponding to targets at different depths inside the gate with label boxes.
Step 5: and carrying out target detection on each frame of image by using the trained target detection model, and obtaining the coordinate information of the detection target area.
Step 6: the coordinates of the detected target are transmitted into a parallax image to obtain corresponding parallax information, the parallax information is converted into the distance between the target and the camera by utilizing a parallax and distance conversion formula, and the distance calculation formula is as follows:
depth=(f*baseline)/disp
where depth is the distance, f is the normalized focal length (that is, fx in the intrinsics), baseline is the distance between the optical centers of the two cameras (the baseline distance, determined by the physical mounting positions of the cameras), and disp is the gray value of each coordinate point in the disparity map.
Step 7: Using the methods of steps 5 and 6, label the height, size, category, coordinates, and other attributes of the targets in each frame; the attribute-judgment logic is shown in Fig. 2. After target attributes have been acquired over consecutive frames, track each target and judge its behavior to identify its passing behavior; the passing-behavior types and judgment methods are listed in Table 5. Here the α value is set to 20 cm, β1 to 10 cm, and β2 and β3 to 20 cm.
If the detected category label is luggage and the size label exceeds 20 cm, it is judged to be large luggage; if below 20 cm, small luggage. If the detected category is not luggage and the height label exceeds 150 cm, the target is judged to be an adult; if below 150 cm, a child. After the attributes of targets over consecutive frames have been acquired, each target's behavior is tracked and judged and its passing behavior identified.
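These attribute thresholds can be written directly as a small helper; the category strings are assumptions for illustration.

```python
# Sketch of this embodiment's attribute labelling: 20 cm splits large/small
# luggage, 150 cm splits adult/child for non-luggage targets.
def label_attribute(category: str, size_cm: float) -> str:
    if category == "luggage":
        return "large luggage" if size_cm > 20 else "small luggage"
    return "adult" if size_cm > 150 else "child"

print(label_attribute("luggage", 35))      # large luggage
print(label_attribute("pedestrian", 172))  # adult
print(label_attribute("pedestrian", 120))  # child
```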
TABLE 5
Step 8: and sending the result of the judgment of the passing behavior to a gate, wherein the gate generates corresponding control, and for normal passing behavior, the pedestrian or the luggage is prevented from being clamped by the coordinate positions of the pedestrian and the object in real time, and for abnormal passing behavior, corresponding audible and visual alarm information is played. In this embodiment, the binocular vision device communicates with the gate through an RS232 serial port.
The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also intended to fall within the scope of the invention.
Claims (4)
1. A gate passing behavior recognition and control method combining target detection and binocular vision is characterized in that: the method comprises the following steps:
When the object detection module identifies a pedestrian and also identifies luggage, the height H of the luggage and the distance D1 between the person and the luggage are calculated; if H > α and D1 < β1, the passenger is judged to be carrying a large piece of luggage, and the passing behavior is normal passing; here α is the threshold distinguishing large from small luggage, and β1 is the threshold for the pedestrian-to-luggage distance;

when the target detection module identifies two pedestrians, their heights and the distance D2 between them are calculated; if one is judged to be an adult, the other a child, and D2 < β2, the passenger is judged to be carrying a child, and the passing behavior is normal passing; here β2 is a threshold for the pedestrian-to-pedestrian distance;

when the object detection module identifies one pedestrian and no luggage, the target is judged to be a single passenger, and the passing behavior is normal passing;

when the target detection module identifies two pedestrians, their heights are calculated; if both are judged to be adults, the distance D3 between them is calculated, and if D3 < β3, the behavior is judged to be tailgating, and the passing behavior is abnormal passing; β3 is a threshold for the pedestrian-to-pedestrian distance;

when the target detection module identifies a pedestrian while the gate door has not opened, and the pedestrian's height is lower than the gate door, the behavior is judged to be crawling under the gate, and the passing behavior is abnormal passing;

when the target detection module identifies a pedestrian while the gate door has not opened, and the pedestrian's height is higher than the gate door, the behavior is judged to be jumping over the gate, and the passing behavior is abnormal passing;

when a card-swiping action occurs and the target detection module identifies two pedestrians, their heights and the distance between them are calculated; if both are judged to be adults, the distance between them changes from small to large, and the two pedestrians swap order, the rear pedestrian is judged to have suddenly overtaken the one in front to enter the gate illegally, and the passing behavior is abnormal passing;
the result of the passing-behavior judgment is sent to the gate, which generates the corresponding control; for normal passing behavior, based on the real-time distances between pedestrians, objects, and the gate, the gate door is not closed while a pedestrian or object is inside the gate; for abnormal passing behavior, the corresponding audible and visual alarm is played;
The target detection module comprises the following acquisition steps:
Collecting targets in the parallax images, marking the categories of the targets, forming training samples by the targets and the categories of the targets, training a target detection model by using the training samples, and detecting the categories of the targets in the region shot by the binocular vision equipment by using the trained target detection model;
the parallax map acquisition steps are as follows:
The method comprises the steps that calibration pictures of targets are collected by using binocular vision equipment at the top of a gate and black-and-white checkerboard calibration plates at the bottom of the gate;
Calibrating the data of the calibration pictures acquired by the left camera and the right camera by using a calibration tool respectively, and solving internal parameter matrixes of the left camera and the right camera;
performing three-dimensional calibration and alignment on the internal parameter matrix result to obtain an external parameter matrix of the binocular vision equipment;
correcting images acquired by the binocular vision equipment by using the inner parameter matrixes of the left camera and the right camera, enabling polar lines of the images of the binocular cameras to be parallel, and adjusting corresponding polar lines to the same horizontal line;
generating a parallax image of each frame of picture acquired by binocular vision equipment by utilizing a stereo matching algorithm;
the target height and the distance between the targets are obtained by the following steps:
Obtaining the target height by calculating the difference value of the distances between the top end and the lower end of the tag frame, and obtaining the distance between two targets by calculating the distance between the center points of the two target tag frames through the coordinate information of the tag frame;
the label frames are obtained as follows:
scanning the parallax map, determining the regions where the gray value changes significantly, and marking label frames on the regions corresponding to targets at different depths inside the gate;
The distance between a label frame and the camera is calculated as follows:
depth=(f*baseline)/disp
wherein depth represents the distance, f represents the focal length from the intrinsic parameter matrix, baseline is the distance between the optical centers of the two cameras, and disp is the gray value (disparity) at each coordinate point of the parallax map.
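The formula maps directly to code; the numeric values below are illustrative, and the units must be consistent (with f in pixels and baseline in meters, depth comes out in meters):

```python
def depth_from_disparity(f, baseline, disp):
    # depth = (f * baseline) / disp, as in the formula above.
    # f: focal length (pixels); baseline: optical-center separation;
    # disp: disparity read from the parallax map at a given point.
    if disp <= 0:
        raise ValueError("disparity must be positive")
    return f * baseline / disp
```

For instance, with f = 700 px and baseline = 0.12 m, a disparity of 42 places the point 2 m from the cameras.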
2. The method for identifying and controlling gate passing behavior combining target detection and binocular vision according to claim 1, wherein: the alpha value is set to 20 cm, the beta1 value is set to 10 cm, and the beta2 and beta3 values are set to 20 cm.
3. The method for identifying and controlling gate passing behavior combining target detection and binocular vision according to claim 1, wherein: the beta2 value is set to 20 cm.
4. The method for identifying and controlling gate passing behavior combining target detection and binocular vision according to claim 1, wherein: the beta3 value is set to 20 cm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111058477.1A CN113781578B (en) | 2021-09-09 | 2021-09-09 | Gate passing behavior identification and control method combining target detection and binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781578A CN113781578A (en) | 2021-12-10 |
CN113781578B true CN113781578B (en) | 2024-05-28 |
Family
ID=78842165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111058477.1A Active CN113781578B (en) | 2021-09-09 | 2021-09-09 | Gate passing behavior identification and control method combining target detection and binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781578B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114596657B (en) * | 2022-02-10 | 2023-07-25 | 北京交通大学 | Gate passing system based on depth data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109657581A (en) * | 2018-12-07 | 2019-04-19 | 南京高美吉交通科技有限公司 | Urban rail transit gate passing control method based on binocular camera behavior detection
CN109657580A (en) * | 2018-12-07 | 2019-04-19 | 南京高美吉交通科技有限公司 | A kind of urban track traffic gate passing control method |
CN112669497A (en) * | 2020-12-24 | 2021-04-16 | 南京熊猫电子股份有限公司 | Pedestrian passageway perception system and method based on stereoscopic vision technology |
WO2021139176A1 (en) * | 2020-07-30 | 2021-07-15 | 平安科技(深圳)有限公司 | Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium |
CN113240829A (en) * | 2021-02-24 | 2021-08-10 | 南京工程学院 | Intelligent gate passing detection method based on machine vision |
- 2021-09-09: CN202111058477.1A filed in China; granted as patent CN113781578B (status: Active)
Non-Patent Citations (1)
Title |
---|
Obstacle measurement method for the reversing environment based on binocular stereo vision; 刘昱岗; 王卓君; 王福景; 张祖涛; 徐宏; Journal of Transportation Systems Engineering and Information Technology; 2016-08-15 (Issue 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113781578A (en) | 2021-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107563347B (en) | Passenger flow counting method and device based on TOF camera | |
EP2426642B1 (en) | Method, device and system for motion detection | |
WO2023155483A1 (en) | Vehicle type identification method, device, and system | |
CN113870356B (en) | Gate passing behavior identification and control method combining target detection and binocular vision | |
CN101030256B (en) | Method and apparatus for cutting vehicle image | |
CN109657581B (en) | Urban rail transit gate traffic control method based on binocular camera behavior detection | |
CN112801074B (en) | Depth map estimation method based on traffic camera | |
CN102629326A (en) | Lane line detection method based on monocular vision | |
US20090309966A1 (en) | Method of detecting moving objects | |
CN102609724B (en) | Method for prompting ambient environment information by using two cameras | |
CN110231013A (en) | A kind of Chinese herbaceous peony pedestrian detection based on binocular vision and people's vehicle are apart from acquisition methods | |
CN112329747B (en) | Vehicle parameter detection method based on video identification and deep learning and related device | |
CN106778668A (en) | A kind of method for detecting lane lines of the robust of joint RANSAC and CNN | |
CN113762009B (en) | Crowd counting method based on multi-scale feature fusion and double-attention mechanism | |
CN110189375A (en) | A kind of images steganalysis method based on monocular vision measurement | |
CN113781578B (en) | Gate passing behavior identification and control method combining target detection and binocular vision | |
CN114463303B (en) | Road target detection method based on fusion of binocular camera and laser radar | |
CN112836634B (en) | Multi-sensor information fusion gate anti-trailing method, device, equipment and medium | |
CN101739549A (en) | Face detection method and system | |
CN105957300B (en) | A kind of wisdom gold eyeball identification is suspicious to put up masking alarm method and device | |
CN109858456A (en) | A kind of rolling stock status fault analysis system | |
CN202058221U (en) | Passenger flow statistic device based on binocular vision | |
CN115497073A (en) | Real-time obstacle camera detection method based on fusion of vehicle-mounted camera and laser radar | |
CN111753781B (en) | Real-time 3D face living body judging method based on binocular infrared | |
CN104063689A (en) | Face image identification method based on binocular stereoscopic vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |