WO2021004312A1 - Intelligent vehicle trajectory measurement method based on a binocular stereo vision system - Google Patents
Intelligent vehicle trajectory measurement method based on a binocular stereo vision system
- Publication number
- WO2021004312A1 (PCT/CN2020/098769)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- vision system
- matching
- point
- binocular stereo
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
- G01C11/12—Interpretation of pictures by comparison of two or more pictures of the same area the pictures being supported in the same relative position as when they were taken
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/285—Analysis of motion using a sequence of stereo image pairs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
- G08G1/054—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed photographing overspeeding vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/056—Detecting movement of traffic to be counted or controlled with provision for distinguishing direction of travel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- The binocular stereo vision system is a classic machine vision system: it uses two cameras to obtain a pair of video images with a certain parallax, and by computing the disparity between the two images, the state of an object in real three-dimensional space can be recovered. Commonly used radar, laser and other speed-measurement methods require coils to be embedded in (and thus damage to) the road, cannot measure all vehicle targets in the field of view, and cannot measure vehicle trajectory, lane changes, or steering state on undulating or curved road sections, so their range of application is limited.
- To address this, the present invention proposes an intelligent vehicle trajectory measurement method based on a binocular stereo vision system.
- Measurement with the binocular stereo vision system is unobtrusive, requires no coils embedded in (and thus no damage to) the road, can measure all vehicle targets in the field of view simultaneously, and can measure vehicle trajectory, lane changes, and steering state on undulating and curved road sections.
- A method for intelligent vehicle trajectory measurement based on a binocular stereo vision system comprises the following steps:
- Step 1: Input a data set composed of images with license plates into the SSD neural network and train it with the license plate as the detection feature, obtaining a license plate recognition model;
- Step 2: Set up the binocular stereo vision system to the right of, in the middle of, or above the lane; calibrate it to obtain the intrinsic and extrinsic parameters of the two cameras; use the calibrated system to record video of the moving target vehicle;
- Step 3: Use the license plate recognition model trained in Step 1 to detect license plates in the video frames obtained in Step 2 and locate the license plate position of the target vehicle;
- Step 4: Use a feature-based matching algorithm to extract feature points from, and match, the license plate positions in successive frames from the same camera, retaining the correct matching points after homography-matrix filtering; likewise use the feature-based matching algorithm to extract feature points from, and stereo-match, the corresponding license plates in corresponding video frames of the left and right cameras of the binocular stereo vision system, again retaining the correct matching points after homography-matrix filtering;
- Step 5: Screen the matching point pairs retained in Step 4, then use ranging by the binocular stereo vision system to eliminate outlier matching points, and keep the coordinates of the matching point closest to the center of the license plate as the position of the target vehicle in the current frame;
- Step 6: Use the binocular stereo vision system to perform stereo measurement on the screened matching point pairs, obtain the spatial coordinates of the vehicle in each video frame, and generate the vehicle's trajectory in chronological order.
- The SSD neural network in Step 1 is based on the classic SSD neural network with the conv11_2 layer removed; the feature information of different scales extracted by the conv4_3, conv7, conv8_2, conv9_2 and conv10_2 layers is fused and input to the classifier, and the license plate location is predicted from the feature maps output by these layers.
- The data set in Step 1 includes the BIT-Vehicle data set provided by the Beijing Institute of Technology, the open license plate database provided by the OpenITS research project hosted by the Guangdong Key Laboratory of Intelligent Transportation Systems, and 1,000 vehicle license plate images taken by the inventors, 11,000 images in total.
- The binocular stereo vision system includes two cameras and a host computer; both cameras are Flea2 industrial cameras.
- the two cameras are a left-eye camera and a right-eye camera. Both the left-eye camera and the right-eye camera are connected to the host computer.
- The method for calibrating the binocular stereo vision system in Step 2 is to calibrate the two cameras using the Zhang Zhengyou calibration method, obtaining their respective optical center coordinates, focal lengths, scale factors, and lens distortion parameters; after these parameters are obtained, the binocular stereo vision system itself is calibrated with the Zhang Zhengyou method, and, with the left-eye camera as the reference, the displacement and rotation angles of the right-eye camera relative to the left-eye camera are obtained through calibration.
- The relative translation vector is T1 = (l, m, n) and the relative rotation vector is V = (α, β, γ), where l, m and n denote the translation distances of the right-eye camera relative to the left-eye camera along the x, y and z directions, and α, β and γ denote the rotation angles of the right-eye camera relative to the left-eye camera around the x, y and z axes;
- From these, the external parameters of the cameras and the convergence point of the binocular stereo vision system are obtained, where B is the baseline length of the binocular camera pair and θ represents the angle between the optical axes of the two cameras;
- the imaging points of the same spatial point in the two cameras are called the left and right corresponding points, respectively.
- the left and right principal points are the intersection points of the optical axes of the left-eye and right-eye cameras with their respective image planes; a′ and b′ are the pixel differences in the u direction of the image coordinate system between the left corresponding point and the left principal point, and between the right corresponding point and the right principal point, respectively, with the convention that the difference is < 0 if the corresponding point lies to the left of the principal point and > 0 otherwise;
- the optical axis is perpendicular to the respective image plane.
- the line connecting the optical center to the target point is referred to as the corresponding ray (line of sight).
- the angles a and b between the corresponding rays and the respective optical axes are calculated as a = arctan(a′/f_l) and b = arctan(b′/f_r);
- the target included angle c is then expressed in terms of a, b and the optical-axis angle θ;
- with v′, the pixel difference in the longitudinal (v) direction of the image coordinate system between the target point and the image center point, and f_l, the focal length of the left-eye camera, the world coordinate y can be obtained, and the world coordinates of the target point P are thus obtained;
- The feature-based matching algorithm is the SURF feature extraction and matching algorithm, which uses the SURF descriptor to describe local features of the video image; homography-matrix filtering describes the relationship between two images of the same scene taken from different perspectives.
- x′, y′, 1 and x, y, 1 respectively represent the homogeneous coordinates of the two corresponding points related by the perspective transformation, i.e. [x′, y′, 1]ᵀ ∝ H·[x, y, 1]ᵀ;
- h_11 through h_32 are the transformation parameters to be determined;
- x′_i1, y′_i1, 1 and x_i1, y_i1, 1 respectively represent the coordinates of the matching points under the perspective transformation;
- t is the Euclidean distance threshold;
- i1 = 1, 2, 3, 4; a smaller distance indicates a higher matching accuracy of the two corresponding matching points.
- The screening method in Step 5 is: in the license plate area of the left-eye camera's video frame, take a circle centered at the center point of the area with the height of the area as its diameter; in the corresponding frame of the right-eye camera's video, take an equal-sized circle centered at the center of the matched area; and remove the matching points that do not fall inside both circles simultaneously.
- The ranging method of the binocular stereo vision system in Step 5 is as follows: calculate the distances d_i of all N matching points, their mean μ and standard deviation σ, and compute the Z-score of each matching point as Z_i = (d_i − μ) / σ;
- from the difference of the coordinates of two successive points, the direction vector of the vehicle's movement is obtained;
- Δα1 represents the steering angle of the vehicle, where Δα1 > 0 indicates that the vehicle turns to the left and Δα1 < 0 indicates that the vehicle turns to the right.
- Beneficial effects of the present invention: the binocular stereo vision system serves as the vehicle video acquisition equipment; the trained SSD neural network automatically identifies and locates the vehicle position; an image matching algorithm tracks and stereo-matches the same target in the binocular stereo video; finally, the spatial position of the vehicle is measured by the binocular stereo vision system, and its trajectory is generated in chronological order.
- the binocular stereo vision system is easy to install and debug, and can automatically recognize a variety of trained characteristics at the same time, which can better meet the development needs of future intelligent transportation networks and the Internet of Things.
- FIG. 1 is a flowchart of the present invention.
- Figure 2 shows the structure of the SSD neural network.
- Figure 3 is a schematic diagram of the convergent binocular stereo vision system structure.
- Figure 4 is a schematic diagram of the transformation model of the target included angle.
- Figure 5 is a schematic diagram of the depth calculation model of the target point.
- Fig. 6 is a schematic diagram of the calculation method of the y value, in which (a) shows the position of the target point relative to the center point on the image plane, and (b) shows the relative position of the target point and the camera's optical axis in real space.
- Figure 7 is a flowchart of the SURF feature extraction and matching algorithm.
- Fig. 8 is a flowchart of monocular video frame feature point matching and binocular stereo matching.
- Figure 9 shows the effect of license plate tracking in a single-channel video, where (a) is the previous frame image, (b) is the result of matching the white license plate of the previous frame in the subsequent frame image, and (c) is the matching result of (b) after homography-matrix filtering.
- Figure 10 is a schematic diagram of further screening of matching points.
- Figure 11 is a schematic diagram of the vehicle trajectory projected onto the XOY plane
- Figure 12 shows the license plate target extracted by the SSD in the first group of experiments with the target vehicle 15 meters away, in which (a) is the left-eye video image and (b) is the right-eye video image.
- Figure 13 shows the license plate target extracted by the SSD in the first set of experiments where the target vehicle is located at a distance of 1 meter, in which (a) is the left-eye video image, and (b) is the right-eye video image.
- Figure 14 is a schematic diagram of the first set of experiments for matching the license plate regions of the video frames corresponding to the left and right eyes, (a) is the left-eye video image, and (b) is the right-eye video image.
- Fig. 15 shows the running trajectories of the first group of test vehicles, in which (a) is the three-dimensional trajectory of the first group of test vehicles, and (b) the two-dimensional projection of the trajectory of the first group of test vehicles on the XOY plane.
- Figure 16 shows screenshots from the left and right videos of the second (steering) experiment (one frame taken every 3 frames) and the license plate detection results, where (a1) to (a3) are left-eye video images 1 to 3 and (b1) to (b3) are right-eye video images 1 to 3.
- Figure 17 is the second set of steering test vehicle trajectories, where (a) is the three-dimensional trajectory of the vehicle, and (b) is the two-dimensional projection of the vehicle trajectory on the XOY plane.
- Fig. 18 is a comparison between the measured trajectory of the present invention and the GPS trajectory, where (a) is the trajectory measured by the system of the present invention, and (b) is the comparison of the measured trajectory of the present invention with the GPS trajectory.
- Figure 19 shows video screenshots of two cars driving in opposite directions and the license plate detection results, where (a) is left-eye video image 1, (b) is right-eye video image 1, (c) is left-eye video image 2, and (d) is right-eye video image 2.
- Figure 20 is a diagram of the trajectory measured in the experiment of two cars driving towards each other, in which (a) the three-dimensional trajectory of the two cars, (b) the projected trajectory of the two cars on the XOY plane.
- Step 1: Input images containing license plates from public traffic monitoring video into the SSD neural network and, with the license plate as the detection feature, train the network to obtain the license plate recognition model.
- Not only can the license plate be used as a detection feature of the target vehicle; features such as the vehicle logo, wheels, windows, and mirrors can also be added to further improve the accuracy of license plate target detection and, in subsequent applications, to recognize vehicles violating traffic rules.
- The detection network used in the present invention removes the conv11_2 convolutional layer from the classic SSD neural network; the feature information of different scales extracted by the conv4_3, conv7, conv8_2, conv9_2 and conv10_2 convolutional layers is fused and input into the classifier, and the position of the license plate is predicted from the feature maps output by these layers.
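As a concrete illustration of this layer selection, here is a minimal PyTorch sketch. It is a stand-in, not the patent's trained network: the backbone convolutions are placeholders, and the channel widths and anchor counts are assumptions; only the choice of the five prediction layers follows the text.

```python
# Minimal sketch of the modified SSD head: conv11_2 is dropped, and detections
# are predicted from five feature maps (conv4_3, conv7, conv8_2, conv9_2,
# conv10_2). The backbone here is a placeholder, not a trained VGG16 base.
import torch
import torch.nn as nn

class LicensePlateSSD(nn.Module):
    def __init__(self, num_classes=2, anchors_per_cell=4):
        super().__init__()
        # Placeholder feature extractors standing in for the real backbone.
        self.stages = nn.ModuleDict({
            "conv4_3":  nn.Conv2d(3,    512,  3, stride=8, padding=1),
            "conv7":    nn.Conv2d(512,  1024, 3, stride=2, padding=1),
            "conv8_2":  nn.Conv2d(1024, 512,  3, stride=2, padding=1),
            "conv9_2":  nn.Conv2d(512,  256,  3, stride=2, padding=1),
            "conv10_2": nn.Conv2d(256,  256,  3, stride=2, padding=1),
        })
        # One box-regression + classification head per retained feature map.
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(ch, anchors_per_cell * (4 + num_classes), 3, padding=1)
            for name, ch in [("conv4_3", 512), ("conv7", 1024),
                             ("conv8_2", 512), ("conv9_2", 256), ("conv10_2", 256)]
        })

    def forward(self, x):
        preds = []
        for name, stage in self.stages.items():
            x = stage(x)                      # multi-scale feature map
            p = self.heads[name](x)           # per-scale box/class predictions
            preds.append(p.flatten(start_dim=2))
        return preds                          # fused downstream by the classifier

plate_ssd = LicensePlateSSD()
outputs = plate_ssd(torch.randn(1, 3, 512, 512))  # one prediction per scale
```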
- the invention uses multiple data sets to train the neural network.
- The data sets used for training and detection include the BIT-Vehicle data set provided by the Beijing Institute of Technology, the open license plate database provided by the OpenITS research program hosted by the Guangdong Key Laboratory of Intelligent Transportation Systems (http://www.openits.cn/), and 1,000 vehicle license plate pictures taken by the inventors' team, 11,000 images in total; with these, the SSD neural network is trained to automatically identify and locate license plate targets in traffic monitoring video.
- the license plate recognition model trained by the SSD neural network can accurately recognize the license plate of each frame of the video.
- Step 2: Set up the binocular stereo vision system to the right of, in the middle of, or above the lane; calibrate it to obtain the intrinsic and extrinsic parameters of the two cameras; then use the calibrated system to record video of the moving target vehicle.
- Two Flea2 industrial cameras from Point Grey and a laptop are used to build the binocular stereo vision system.
- the two cameras simultaneously shoot the measurement area and use a USB data cable to communicate with the laptop.
- The laptop used is equipped with a Core i7 CPU, 8 GB of memory, an NVIDIA GeForce 830M discrete graphics card, and a solid-state drive.
- the binocular camera of the binocular stereo vision system is calibrated to obtain the internal and external parameters of the binocular camera.
- the invention uses Zhang Zhengyou calibration method to calibrate the binocular camera to obtain the respective optical center coordinates, focal length, scale factor, lens distortion and other parameters of the two cameras.
- the binocular stereo vision system is calibrated by Zhang Zhengyou's calibration method.
- the left-eye camera is used as the benchmark to obtain the displacement and rotation of the right-eye camera relative to the left-eye camera through calibration.
- In actual measurement, the camera system must be recalibrated every time the camera position changes, to ensure measurement accuracy.
- After the internal and external parameters of the camera system have been obtained, the binocular stereo vision system is used for distance and trajectory measurement.
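The calibration flow can be sketched with OpenCV's implementation of Zhang's planar-target method; the chessboard geometry, square size, and file paths below are illustrative assumptions, not values from the patent.

```python
# Per-camera intrinsic calibration followed by stereo calibration with the
# left camera as the reference, yielding the baseline B and rotation angles.
import cv2
import glob
import numpy as np

BOARD = (9, 6)     # inner chessboard corners (assumed)
SQUARE = 0.025     # square size in metres (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("calib/left_*.png")),
                  sorted(glob.glob("calib/right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, BOARD)
    okr, cr = cv2.findChessboardCorners(gr, BOARD)
    if okl and okr:
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

# Intrinsics (optical centre, focal length, distortion) of each camera.
_, Kl, Dl, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, Kr, Dr, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)

# Extrinsics of the right camera relative to the left: rotation R, translation T.
_, Kl, Dl, Kr, Dr, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, Kl, Dl, Kr, Dr, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)

B = float(np.linalg.norm(T))       # baseline length B
rvec, _ = cv2.Rodrigues(R)         # rotation angles about x, y, z (radians)
print("baseline:", B, "rotation vector:", rvec.ravel())
```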
- The calibrated binocular stereo vision system is used to film moving vehicles; the left and right cameras cover slightly different areas from slightly different angles, so the two video images have a certain parallax.
- In the three-dimensional measurement principle of the binocular stereo vision system, the left-eye camera serves as the main camera. After calibration with the Zhang Zhengyou method, the relative translation vector T1 of the right-eye camera in the binocular stereo vision system is obtained; from the baseline length B (the distance between the two cameras) and the angle θ between their optical axes, the external camera parameters and the convergence point of the binocular stereo vision system are obtained, as shown in FIG. 3.
- the target included angle c and the depth information of the target object can be calculated.
- the transformation model of the target angle is shown in Figure 4.
- the imaging points of the same spatial point in the two cameras are called the left corresponding point LCP and the right corresponding point RCP respectively.
- LPP and RPP are the intersection points of the optical axes of the left and right eye cameras and the respective image planes respectively.
- a′ and b′ are the pixel differences between the corresponding points and the principal points in the u direction, respectively, with the convention that the difference is < 0 if the corresponding point lies to the left of the principal point, and > 0 otherwise.
- the optical axis is perpendicular to the respective image plane.
- The line connecting the optical center to the target point is referred to as the corresponding ray (line of sight).
- The angles a and b between the corresponding rays and the respective optical axes can be calculated as a = arctan(a′/f_l) and b = arctan(b′/f_r), where f_l and f_r represent the focal lengths of the two cameras.
- The target included angle c is then expressed in terms of a, b and the optical-axis angle θ.
- the world coordinate of x can be calculated as above.
- The projection of the target point to be measured onto the left camera's image through the imaging mapping is called the left corresponding point LCP.
- the left principal point LPP is the intersection point of the optical axis of the left camera and the two-dimensional imaging surface.
- the pixel difference between the left corresponding point LCP and the left principal point LPP in the v direction is v'
- f_l is the focal length of the left camera.
- the world coordinates of the target point P can be obtained as:
- the world coordinates of the target point in area II, area III and area IV can be obtained.
- (See invention patent CN 107705331 A, "A method for vehicle video speed measurement based on multi-view cameras".) With further calculation, the distance between the target point and the camera (that is, the center of the left camera's sensor) can be obtained.
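The closed-form expressions referred to above did not survive extraction, so the sketch below shows one standard convergent-stereo triangulation built only from the quantities the text defines (pixel offsets a′ and b′, focal lengths f_l and f_r, baseline B, optical-axis angle θ); the symmetric-rig layout and the sign bookkeeping are assumptions, not the patent's exact formulas.

```python
# Convergent-stereo ranging: a and b come from the pixel offsets and focal
# lengths, the target included angle c closes the triangle over baseline B,
# and the range follows from the law of sines.
import numpy as np

def triangulate(a_pix, b_pix, f_l, f_r, B, theta):
    """a_pix, b_pix: u-offsets of LCP/RCP from the principal points (pixels,
    negative left of the principal point); f_l, f_r: focal lengths in pixels;
    B: baseline (m); theta: convergence angle between optical axes (rad)."""
    a = np.arctan2(a_pix, f_l)        # line of sight vs. left optical axis
    b = np.arctan2(b_pix, f_r)        # line of sight vs. right optical axis
    # Interior angles of the triangle (left camera, right camera, target P),
    # assuming each axis is rotated inward by theta/2 (symmetric rig).
    angle_l = np.pi / 2 - theta / 2 - a
    angle_r = np.pi / 2 - theta / 2 + b
    c = np.pi - angle_l - angle_r     # target included angle
    r_left = B * np.sin(angle_r) / np.sin(c)  # range from left optical centre
    return c, r_left

c, r = triangulate(a_pix=-42.0, b_pix=35.0, f_l=1400.0, f_r=1400.0,
                   B=0.6, theta=np.deg2rad(20.0))
print(np.rad2deg(c), r)               # included angle (deg) and range (m)
```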
- Step 3 Use the license plate recognition model trained in Step 1 to detect the license plate of the video frame obtained by the calibrated binocular stereo vision system to locate the license plate position of the target vehicle.
- The SSD neural network algorithm of Step 1 is used to extract license plate targets and mark them with regular rectangular frames; it provides fast processing while maintaining detection accuracy, meeting the need for rapid detection and positioning of targets in video.
- target detection is performed on the video frames obtained by the calibrated camera in step two, so as to locate the license plate target.
- It is not necessary to detect the target accurately in every frame within the measurement area; two or more pairs of accurately detected image frames suffice to determine the vehicle trajectory and steering state.
- Step 4: Use the feature-based matching algorithm to extract and match feature points of the license plate position in successive frames from the same camera, filtering with the homography matrix to ensure correct vehicle tracking; use the feature-based matching algorithm to extract feature points from, and stereo-match, the corresponding license plates in corresponding video frames of the left and right cameras of the binocular stereo vision system, retaining the correct matching points for stereo measurement after homography-matrix filtering.
- The two-dimensional image matching algorithm used in the present invention is feature-based: image characteristics are divided into point, line (edge), surface, and other features to generate feature descriptors, and the similarity of the descriptors is then compared to match corresponding features between two video images.
- Surface feature extraction is cumbersome, computationally heavy, and time-consuming.
- the SURF feature is used for video image feature extraction and matching.
- The SURF descriptor describes local features of the video image; when the video image is rotated, translated, or scaled, the SURF feature extraction and matching algorithm has good stability.
- The SURF feature extraction and matching algorithm consists of the following parts: 1. Extract key points, preferring points unaffected by lighting changes, such as corners, edge points, bright points in dark regions, and dark points in bright regions; 2. Extract detailed local feature vectors for these key points; 3. Compare the feature vectors of the template image and the target image pair by pair to find the mutually best-matching point pairs, thereby matching the two images.
- The SURF algorithm is used to match detected license plates within a single video, achieving independent tracking of multiple targets; corresponding frames of the left and right videos are then matched to extract corresponding feature points for stereo measurement. For example, taking the white vehicle license plate in Figure 9(a) as the tracking target, the SURF feature extraction and matching algorithm locates the same license plate in the second image, as shown in the dashed box in Figure 9(b).
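A minimal sketch of this SURF matching step, assuming an opencv-contrib build (SURF lives in the xfeatures2d module and is not included in default OpenCV builds); the image files and matcher settings are illustrative.

```python
# SURF keypoints + descriptors on a plate template and a search frame, then
# FLANN matching with Lowe's ratio test to keep only distinctive matches.
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

prev_roi = cv2.imread("plate_prev.png", cv2.IMREAD_GRAYSCALE)    # template
next_frame = cv2.imread("frame_next.png", cv2.IMREAD_GRAYSCALE)  # search image

kp1, des1 = surf.detectAndCompute(prev_roi, None)
kp2, des2 = surf.detectAndCompute(next_frame, None)

flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
good = [m for m, n in flann.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]   # ratio test
print(len(good), "tentative matches before homography filtering")
```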
- The matching result will not be 100% accurate: if the image is not clear enough, or contains an area resembling the matching template, wrong matching points occur, and these strongly affect the accuracy of vehicle tracking and the results of stereo measurement. Wrong matching points therefore need to be eliminated from the matching result.
- the homography matrix is used to describe the relationship between two images taken at different perspectives for the same thing. Assuming that there is a perspective transformation between the two images, the homography matrix, which is the perspective transformation matrix H, is defined as follows:
- x′, y′, 1 and x, y, 1 respectively represent the homogeneous coordinates of the two corresponding points related by the perspective transformation, i.e. [x′, y′, 1]ᵀ ∝ H·[x, y, 1]ᵀ, and h_11 through h_32 are the transformation parameters to be determined.
- A homography matrix H is calculated for each sample of matching points, and the H with the largest number of inliers (i.e., exact matching points) is selected as the correct result; to verify the homography matrix, the Euclidean distance between corresponding matching points after the perspective transformation is calculated as follows:
- x′_i1, y′_i1, 1 and x_i1, y_i1, 1 respectively represent the coordinates of the matching points under the perspective transformation;
- t is the Euclidean distance threshold;
- i1 = 1, 2, 3, 4; the smaller the distance, the higher the matching accuracy of the two matching points.
- the matching points are extracted using the SURF feature extraction and matching algorithm. After filtering by the homography matrix, the correct matching points are retained.
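Continuing the SURF sketch above, the homography filtering described here might look as follows: H is estimated with RANSAC, every matched point is warped by H, and pairs whose Euclidean distance after the perspective transformation exceeds the threshold t are discarded. The threshold values are assumptions.

```python
# Homography-based verification of tentative SURF matches.
# Builds on kp1, kp2 and the `good` match list from the previous sketch.
import cv2
import numpy as np

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

warped = cv2.perspectiveTransform(src, H)            # H applied to [x, y, 1]^T
dist = np.linalg.norm(warped - dst, axis=2).ravel()  # Euclidean distances

t = 3.0                                              # distance threshold (assumed)
kept = [m for m, d in zip(good, dist) if d < t]
print(len(kept), "matches retained after homography filtering")
```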
- Step 5: Further screen the matching point pairs retained in Step 4, then eliminate outliers by ranging with the binocular stereo vision system, and keep the coordinates of the matching point closest to the center of the license plate as the position of the target vehicle in the current frame.
- the remaining matching points are further screened.
- In the license plate area of the left-eye camera's video frame, take a circle centered at the center of the area with the area height as its diameter; in the corresponding frame of the other video, take an equal-sized circle centered at the center of the matched area; matching points that do not fall inside both circles simultaneously are eliminated.
- The two pairs of matching points connected by solid lines fall within the circular areas and are matched correctly, so they are kept; the matching points connected by the dotted line do not fall within the corresponding circular area on the right license plate and are removed.
- the matching point indicated by the solid line closest to the center point of the license plate is selected as the target position of the stereo detection.
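A small sketch of this circular screening rule, with plate regions given as (x, y, w, h) bounding boxes; the diameter-equals-region-height rule follows the description above, while all names and numbers are illustrative.

```python
# Keep a matched pair only if both points fall inside the circles centred on
# their respective plate regions (radius = region height / 2).
import numpy as np

def inside_circle(pt, box):
    x, y, w, h = box
    centre = np.array([x + w / 2.0, y + h / 2.0])
    return np.linalg.norm(np.asarray(pt, dtype=float) - centre) <= h / 2.0

def screen(pairs, left_box, right_box):
    """pairs: list of ((xl, yl), (xr, yr)) matched point pairs."""
    return [(pl, pr) for pl, pr in pairs
            if inside_circle(pl, left_box) and inside_circle(pr, right_box)]

pairs = [((135, 58), (455, 68)),   # near both plate centres: kept
         ((140, 95), (470, 140))]  # outside the circles: eliminated
print(screen(pairs, left_box=(80, 40, 120, 40), right_box=(400, 45, 120, 40)))
```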
- The distances d_i of all N matching points are calculated, together with their mean μ and standard deviation σ, and the Z-score of each point is computed as Z_i = (d_i − μ) / σ.
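A minimal sketch of that Z-score screen; the cut-off |Z| ≤ 1 is an assumption, since the text does not state which threshold is used.

```python
# Reject matching points whose measured range deviates too far from the mean.
import numpy as np

def zscore_filter(distances, z_max=1.0):
    d = np.asarray(distances, dtype=float)
    mu, sigma = d.mean(), d.std()
    if sigma == 0:                    # all points agree; nothing to remove
        return np.ones_like(d, dtype=bool)
    z = (d - mu) / sigma              # Z_i = (d_i - mu) / sigma
    return np.abs(z) <= z_max         # True = keep this matching point

d_i = [15.2, 15.4, 15.3, 18.9, 15.1]  # ranges of N matched points (m)
print(zscore_filter(d_i))             # the 18.9 m outlier is rejected
```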
- Step 6: Use the binocular stereo vision system to perform stereo measurement on the screened matching point pairs, obtain the spatial coordinates of the vehicle in each frame, and derive the vehicle's trajectory in time sequence.
- The three-dimensional trajectory of the vehicle is obtained according to the time sequence, and from the difference of the coordinates of two successive points, the direction vector of the vehicle's movement between them is calculated.
- The 3D trajectory of the vehicle is projected onto the XOY plane, that is, the vertical coordinate is removed, as shown in Figure 11, where α′ and β′ characterize the vehicle's direction of travel from one point to the next.
- ⁇ 1 represents the steering angle of the vehicle.
- ⁇ 1>0 it means that the vehicle is turning to the left.
- ⁇ 1 ⁇ 0 it means that the vehicle is turning to the right. .
- For comparison, a satellite speed-measurement device is used: it relies on GPS+GLONASS dual-satellite positioning data, communicates with a mobile phone through a Bluetooth 4.0 chip, and exchanges 10 Hz high-frequency GPS data to ensure measurement accuracy.
- the measurement error is 2%.
- the device displays real-time data through the mobile app and records the running track of each test.
- In the first group of experiments, the vehicle travels in a straight line at a constant speed.
- the binocular stereo vision system equipment is erected on the right side of the lane, maintaining an angle of about 20 degrees with the lane.
- The video resolution is 1288 × 964 and the frame rate is 30 frames/s.
- Using the method of Step 5, the turning state of the vehicle is calculated with every 3 frames as a time node, i.e., 10 measurements per second, and the steering table shown in Table 1 is drawn.
- Figure 15 (a) is the three-dimensional reconstruction of the vehicle's trajectory, and (b) is the trajectory's projection on the XOY plane; both show that the vehicle kept to a straight line during the measurement. Table 1 shows that, owing to practical issues such as uneven road surface during actual driving, the vehicle turned slightly, with a measured maximum steering angle of only 1.75 degrees, but the motion remained straight overall. These experiments show that the present invention measures the trajectory well when the vehicle moves in a straight line at constant speed.
- Figure 18 (a) shows the vehicle trajectory chart recorded by the satellite speed-measuring equipment, in which up is north, the line represents the vehicle trajectory, and the dot represents the camera installation position; the upper end of the line is the starting point, the lower end is the end point, and the section of the line south of the camera position lies outside the device's shooting range. In Figure 18 (b), each five-pointed star represents the three-dimensional position of the license plate center point in the corresponding frame, with the upper-left corner as the starting point and the lower-right corner as the end point. Figure 18 (b) shows that the measured vehicle trajectory is basically consistent with the satellite-recorded trajectory.
- Table 2 shows the measurement results of the vehicle steering angle.
- ⁇ >0 it means that the vehicle is turning to the left
- ⁇ 0 it means that the vehicle is turning to the right.
- It can be seen from Figure 18 and Table 2 that the vehicle first turned 4.2 degrees to the left, continued turning left with a gradually decreasing angle until it went straight, and then turned to the right.
- The maximum measured turn was 6.2 degrees to the right, and the vehicle kept turning right until it left the shooting field of view. Comparison with the satellite track record shows that the steering measurement of the present invention is stable and reliable.
- In this experiment the two vehicles drive toward each other, and the binocular stereo vision system is set up between the two lanes, with the vehicle on the left travelling from far to near and the vehicle on the right from near to far.
- Using the vehicle speedometer as a reference, the two vehicles were driven straight at a maximum speed of 30 km/h, as shown in Figure 19.
- A satellite speed-measurement device was set up in each of the two vehicles, and its results were used for comparison. The vehicle trajectories were then reconstructed, with the result shown in Figure 20: the two cars drive toward each other, remaining essentially straight and parallel, which conforms to the route conditions of the experimental design.
- the binocular stereo vision system has better stability and reliability in actual vehicle trajectory measurement applications.
- The binocular stereo vision system is highly intelligent and extensible: it can independently perform video capture, vehicle recognition, trajectory detection, and other functions without the assistance of other equipment. Binocular stereo ranging is passive, meaning the system does not radiate any signals or rays, so it is safe, consumes little energy, and neither affects human health nor interferes with other electronic equipment. The installation angle is unrestricted (it need not be perpendicular or parallel to the target's direction of travel), making installation and debugging easy, and the system can measure vehicles across multiple lanes, multiple targets, and multiple directions simultaneously.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Signal Processing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Geometry (AREA)
- Biodiversity & Conservation Biology (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an intelligent vehicle trajectory measurement method based on a binocular stereo vision system. The method comprises: inputting a data set into an SSD neural network to obtain a license plate recognition model; calibrating a binocular stereo vision system and filming video of a moving target vehicle; performing license plate detection on the video frames by means of the license plate recognition model; stereo-matching the positions of a license plate in successive frames from the same camera and in corresponding left and right video frames by means of a feature-based matching algorithm, and retaining the correct matching points after homography-matrix filtering; screening the retained matching point pairs and eliminating some of them, so as to keep the matching point closest to the center of the license plate as the position of the target vehicle in the current frame; and performing stereo measurement on the screened matching point pairs to acquire the position, in spatial coordinates, of the vehicle in the video frame, and generating a trajectory of the vehicle in chronological order. The present invention is simple to install and debug, and enables simultaneous measurement of vehicles across multiple lanes, multiple targets, and multiple directions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/422,446 US11900619B2 (en) | 2019-07-08 | 2020-06-29 | Intelligent vehicle trajectory measurement method based on binocular stereo vision system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910608892.6 | 2019-07-08 | ||
CN201910608892.6A CN110285793B (zh) | 2019-07-08 | 2019-07-08 | 一种基于双目立体视觉系统的车辆智能测轨迹方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021004312A1 true WO2021004312A1 (fr) | 2021-01-14 |
Family
ID=68021957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/098769 WO2021004312A1 (fr) | 2019-07-08 | 2020-06-29 | Procédé de mesure intelligente de trajectoire de véhicule basé sur un système de vision stéréoscopique binoculaire |
Country Status (3)
Country | Link |
---|---|
US (1) | US11900619B2 (fr) |
CN (1) | CN110285793B (fr) |
WO (1) | WO2021004312A1 (fr) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113327192A (zh) * | 2021-05-11 | 2021-08-31 | 武汉唯理科技有限公司 | 一种通过三维测量技术测算汽车行驶速度的方法 |
CN113654509A (zh) * | 2021-07-28 | 2021-11-16 | 北京交通大学 | 轮轨接触姿态测量的检测布局控制方法及装置、介质 |
CN114155290A (zh) * | 2021-11-18 | 2022-03-08 | 合肥富煌君达高科信息技术有限公司 | 一种用于大视场高速运动测量的系统与方法 |
US11529258B2 (en) | 2020-01-23 | 2022-12-20 | Shifamed Holdings, Llc | Adjustable flow glaucoma shunts and associated systems and methods |
CN116953680A (zh) * | 2023-09-15 | 2023-10-27 | 成都中轨轨道设备有限公司 | 一种基于图像的目标物实时测距方法及系统 |
CN118378122A (zh) * | 2024-03-26 | 2024-07-23 | 安徽交控信息产业有限公司 | 一种车型识别设备的部署方法及系统 |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10691968B2 (en) | 2018-02-08 | 2020-06-23 | Genetec Inc. | Systems and methods for locating a retroreflective object in a digital image |
CN110285793B (zh) * | 2019-07-08 | 2020-05-15 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测轨迹方法 |
CN110322702B (zh) * | 2019-07-08 | 2020-08-14 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测速方法 |
EP4022590A4 (fr) * | 2019-10-26 | 2022-12-28 | Genetec Inc. | Système automatisé de reconnaissance de plaque d'immatriculation et procédé associé |
CN110807934A (zh) * | 2019-11-13 | 2020-02-18 | 赵洪涛 | 高速公路运营监控智能调度平台 |
CN110956336B (zh) * | 2019-12-13 | 2023-05-30 | 中海服信息科技股份有限公司 | 一种基于图算法的行车路线挖掘方法 |
CN111079751B (zh) * | 2019-12-16 | 2021-02-26 | 深圳市芊熠智能硬件有限公司 | 识别车牌真伪的方法、装置、计算机设备及存储介质 |
CN111354028B (zh) * | 2020-02-19 | 2022-05-31 | 山东大学 | 基于双目视觉的输电通道隐患物识别追踪方法 |
US11830160B2 (en) * | 2020-05-05 | 2023-11-28 | Nvidia Corporation | Object detection using planar homography and self-supervised scene structure understanding |
CN111891124B (zh) * | 2020-06-08 | 2021-08-24 | 福瑞泰克智能系统有限公司 | 目标信息融合的方法、系统、计算机设备和可读存储介质 |
CN113822930B (zh) * | 2020-06-19 | 2024-02-09 | 黑芝麻智能科技(重庆)有限公司 | 用于高精度定位停车场中的物体的系统和方法 |
CN111914715B (zh) * | 2020-07-24 | 2021-07-16 | 廊坊和易生活网络科技股份有限公司 | 一种基于仿生视觉的智能车目标实时检测与定位方法 |
CN111985448A (zh) * | 2020-09-02 | 2020-11-24 | 深圳壹账通智能科技有限公司 | 车辆图像识别方法、装置、计算机设备及可读存储介质 |
CN112070803A (zh) * | 2020-09-02 | 2020-12-11 | 安徽工程大学 | 一种基于ssd神经网络模型的无人船路径跟踪方法 |
CN112489080A (zh) * | 2020-11-27 | 2021-03-12 | 的卢技术有限公司 | 基于双目视觉slam的车辆定位及车辆3d检测方法 |
CN112365526B (zh) * | 2020-11-30 | 2023-08-25 | 湖南傲英创视信息科技有限公司 | 弱小目标的双目检测方法及系统 |
EP4036856A1 (fr) * | 2021-02-02 | 2022-08-03 | Axis AB | Mise à jour de points annotés dans une image numérique |
CN113129449B (zh) * | 2021-04-16 | 2022-11-18 | 浙江孔辉汽车科技有限公司 | 一种基于双目视觉的车辆路面特征识别及三维重建方法 |
CN113332110B (zh) * | 2021-06-02 | 2023-06-27 | 西京学院 | 一种基于景物听觉感知的导盲手电及导盲方法 |
CN114119759B (zh) * | 2022-01-28 | 2022-06-14 | 杭州宏景智驾科技有限公司 | 多位置机动车定位方法和装置、电子设备和存储介质 |
CN114495509B (zh) * | 2022-04-08 | 2022-07-12 | 四川九通智路科技有限公司 | 基于深度神经网络监控隧道运行状态的方法 |
CN114842091B (zh) * | 2022-04-29 | 2023-05-23 | 广东工业大学 | 一种双目鸡蛋尺寸流水线测定方法 |
CN115166634B (zh) * | 2022-05-18 | 2023-04-11 | 北京锐士装备科技有限公司 | 一种多手段结合的无人机飞手定位方法及系统 |
US20230394806A1 (en) * | 2022-06-07 | 2023-12-07 | Hefei University Of Technology | Quickly extraction of morphology characterization parameters of recycled concrete sand particles based on deep learning technology |
CN114758511B (zh) * | 2022-06-14 | 2022-11-25 | 深圳市城市交通规划设计研究中心股份有限公司 | 一种跑车超速检测系统、方法、电子设备及存储介质 |
CN114897987B (zh) * | 2022-07-11 | 2022-10-28 | 浙江大华技术股份有限公司 | 一种确定车辆地面投影的方法、装置、设备及介质 |
CN115823970B (zh) * | 2022-12-26 | 2024-07-12 | 浙江航天润博测控技术有限公司 | 一种视觉弹丸轨迹生成系统 |
CN115775459B (zh) * | 2023-02-13 | 2023-04-28 | 青岛图达互联信息科技有限公司 | 一种基于智能图像处理的数据采集系统及方法 |
CN116071566A (zh) * | 2023-03-23 | 2023-05-05 | 广东石油化工学院 | 基于网格流去噪和多尺度目标网络的钢桶轨迹检测方法 |
TWI839241B (zh) * | 2023-06-06 | 2024-04-11 | 台達電子工業股份有限公司 | 用於視角轉換的影像處理裝置以及方法 |
CN117612364A (zh) * | 2023-10-13 | 2024-02-27 | 深圳市综合交通与市政工程设计研究总院有限公司 | 一种机动车的违章检测方法、装置、电子设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101877174A (zh) * | 2009-09-29 | 2010-11-03 | 杭州海康威视软件有限公司 | 车速测量方法、监控机及车速测量系统 |
CN103177582A (zh) * | 2013-04-22 | 2013-06-26 | 杜东 | 一种视频测速和车牌识别的一体机 |
CN103473926A (zh) * | 2013-09-11 | 2013-12-25 | 无锡加视诚智能科技有限公司 | 枪球联动道路交通参数采集及违章抓拍系统 |
JP2015064752A (ja) * | 2013-09-25 | 2015-04-09 | 株式会社東芝 | 車両監視装置、および車両監視方法 |
WO2018125939A1 (fr) * | 2016-12-30 | 2018-07-05 | DeepMap Inc. | Odométrie visuelle et alignement par paires pour la création d'une carte haute définition |
CN110285793A (zh) * | 2019-07-08 | 2019-09-27 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测轨迹方法 |
CN110322702A (zh) * | 2019-07-08 | 2019-10-11 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测速方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110064197A (ko) * | 2009-12-07 | 2011-06-15 | 삼성전자주식회사 | 물체 인식 시스템 및 그 물체 인식 방법 |
US20160232410A1 (en) * | 2015-02-06 | 2016-08-11 | Michael F. Kelly | Vehicle speed detection |
US10163038B2 (en) * | 2015-12-24 | 2018-12-25 | Sap Se | Error detection in recognition data |
CN108243623B (zh) * | 2016-09-28 | 2022-06-03 | 驭势科技(北京)有限公司 | 基于双目立体视觉的汽车防碰撞预警方法和系统 |
US10296794B2 (en) * | 2016-12-20 | 2019-05-21 | Jayant Rtti | On-demand artificial intelligence and roadway stewardship system |
JP6975929B2 (ja) * | 2017-04-18 | 2021-12-01 | パナソニックIpマネジメント株式会社 | カメラ校正方法、カメラ校正プログラム及びカメラ校正装置 |
CN107563372B (zh) * | 2017-07-20 | 2021-01-29 | 济南中维世纪科技有限公司 | 一种基于深度学习ssd框架的车牌定位方法 |
-
2019
- 2019-07-08 CN CN201910608892.6A patent/CN110285793B/zh active Active
-
2020
- 2020-06-29 US US17/422,446 patent/US11900619B2/en active Active
- 2020-06-29 WO PCT/CN2020/098769 patent/WO2021004312A1/fr active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101877174A (zh) * | 2009-09-29 | 2010-11-03 | 杭州海康威视软件有限公司 | 车速测量方法、监控机及车速测量系统 |
CN103177582A (zh) * | 2013-04-22 | 2013-06-26 | 杜东 | 一种视频测速和车牌识别的一体机 |
CN103473926A (zh) * | 2013-09-11 | 2013-12-25 | 无锡加视诚智能科技有限公司 | 枪球联动道路交通参数采集及违章抓拍系统 |
JP2015064752A (ja) * | 2013-09-25 | 2015-04-09 | 株式会社東芝 | 車両監視装置、および車両監視方法 |
WO2018125939A1 (fr) * | 2016-12-30 | 2018-07-05 | DeepMap Inc. | Odométrie visuelle et alignement par paires pour la création d'une carte haute définition |
CN110285793A (zh) * | 2019-07-08 | 2019-09-27 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测轨迹方法 |
CN110322702A (zh) * | 2019-07-08 | 2019-10-11 | 中原工学院 | 一种基于双目立体视觉系统的车辆智能测速方法 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11529258B2 (en) | 2020-01-23 | 2022-12-20 | Shifamed Holdings, Llc | Adjustable flow glaucoma shunts and associated systems and methods |
CN113327192A (zh) * | 2021-05-11 | 2021-08-31 | 武汉唯理科技有限公司 | 一种通过三维测量技术测算汽车行驶速度的方法 |
CN113327192B (zh) * | 2021-05-11 | 2022-07-08 | 武汉唯理科技有限公司 | 一种通过三维测量技术测算汽车行驶速度的方法 |
CN113654509A (zh) * | 2021-07-28 | 2021-11-16 | 北京交通大学 | 轮轨接触姿态测量的检测布局控制方法及装置、介质 |
CN114155290A (zh) * | 2021-11-18 | 2022-03-08 | 合肥富煌君达高科信息技术有限公司 | 一种用于大视场高速运动测量的系统与方法 |
CN114155290B (zh) * | 2021-11-18 | 2022-09-09 | 合肥富煌君达高科信息技术有限公司 | 一种用于大视场高速运动测量的系统与方法 |
CN116953680A (zh) * | 2023-09-15 | 2023-10-27 | 成都中轨轨道设备有限公司 | 一种基于图像的目标物实时测距方法及系统 |
CN116953680B (zh) * | 2023-09-15 | 2023-11-24 | 成都中轨轨道设备有限公司 | 一种基于图像的目标物实时测距方法及系统 |
CN118378122A (zh) * | 2024-03-26 | 2024-07-23 | 安徽交控信息产业有限公司 | 一种车型识别设备的部署方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN110285793B (zh) | 2020-05-15 |
CN110285793A (zh) | 2019-09-27 |
US11900619B2 (en) | 2024-02-13 |
US20220092797A1 (en) | 2022-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021004312A1 (fr) | Intelligent vehicle trajectory measurement method based on a binocular stereo vision system | |
WO2021004548A1 (fr) | Intelligent vehicle speed measurement method based on a binocular stereo vision system | |
CN101894366B (zh) | 一种获取标定参数的方法、装置及一种视频监控系统 | |
Li et al. | Easy calibration of a blind-spot-free fisheye camera system using a scene of a parking space | |
EP3676796A1 (fr) | Systèmes et procédés de correction d'une carte à haute définition sur la base de la détection d'objets d'obstruction | |
US20130287290A1 (en) | Image registration of multimodal data using 3d geoarcs | |
US20180293450A1 (en) | Object detection apparatus | |
Beyeler et al. | Vision-based robust road lane detection in urban environments | |
CN109741241B (zh) | 鱼眼图像的处理方法、装置、设备和存储介质 | |
CN109544635B (zh) | 一种基于枚举试探的相机自动标定方法 | |
WO2021017211A1 (fr) | Procédé et dispositif de positionnement de véhicule utilisant la détection visuelle, et terminal monté sur un véhicule | |
WO2023056789A1 (fr) | Procédé et système d'identification d'obstacles destinés à la conduite automatique d'une machine agricole, dispositif, et support de stockage | |
CN112204614A (zh) | 来自非固定相机的视频中的运动分割 | |
Geiger et al. | Object flow: A descriptor for classifying traffic motion | |
Dornaika et al. | A new framework for stereo sensor pose through road segmentation and registration | |
Gao et al. | Complete and accurate indoor scene capturing and reconstruction using a drone and a robot | |
CN116958195A (zh) | 物件追踪整合方法及整合装置 | |
CN108090930A (zh) | 基于双目立体相机的障碍物视觉检测系统及方法 | |
Giosan et al. | Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information | |
CN116883981A (zh) | 一种车牌定位识别方法、系统、计算机设备及存储介质 | |
Fan et al. | Human-m3: A multi-view multi-modal dataset for 3d human pose estimation in outdoor scenes | |
CN116385994A (zh) | 一种三维道路线提取方法及相关设备 | |
CN109711352A (zh) | 基于几何卷积神经网络的车辆前方道路环境透视感知方法 | |
CN105894505A (zh) | 一种基于多摄像机几何约束的快速行人定位方法 | |
Xiong et al. | A 3d estimation of structural road surface based on lane-line information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20836296 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20836296 Country of ref document: EP Kind code of ref document: A1 |