CN111723778A - Vehicle distance measuring system and method based on MobileNet-SSD - Google Patents


Info

Publication number
CN111723778A
CN111723778A (application CN202010647265.6A)
Authority
CN
China
Prior art keywords
vehicle
module
image
detection
mobilenet
Prior art date
Legal status
Granted
Application number
CN202010647265.6A
Other languages
Chinese (zh)
Other versions
CN111723778B (en)
Inventor
郭景华
肖宝平
王靖瑶
王班
李文昌
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202010647265.6A priority Critical patent/CN111723778B/en
Publication of CN111723778A publication Critical patent/CN111723778A/en
Application granted granted Critical
Publication of CN111723778B publication Critical patent/CN111723778B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of vehicle lights or traffic lights
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/045: Neural networks; combinations of networks
    • G06T 7/85: Stereo camera calibration
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle
    • G06V 2201/08: Detecting or categorising vehicles


Abstract

A vehicle distance measuring system and method based on the MobileNet-SSD, relating to intelligent automobiles. The system comprises a calibration module, an image acquisition module, a detection module, judgment modules, an estimation module, a tracking module, a stereo matching module and a distance measurement module. The method comprises the following steps: construct a binocular vision system and calibrate binocular vision; synchronously acquire left-eye and right-eye images with the binocular camera; detect the target vehicle and judge whether a vehicle is detected in the first frame; further determine the vehicle-region coordinates; carry out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images; and calculate the region-point disparity to obtain the average distance between the target object and the ego vehicle. The detection process comprises HSV vehicle-shadow detection and the MobileNet-SSD vehicle detection algorithm, combined with a vehicle tracking algorithm, which improves the speed and accuracy of obtaining the target vehicle region, simplifies the image recognition process, improves the detection effect and realizes a real-time, efficient distance measurement method.

Description

Vehicle distance measuring system and method based on MobileNet-SSD
Technical Field
The invention relates to intelligent automobiles, and in particular to a vehicle distance measuring system and method based on the MobileNet-SSD.
Background
Vision plays an increasingly important role in intelligent driving-assistance systems: it can actively warn the driver or assist in controlling the vehicle to prevent accidents and improve vehicle safety. Vision-based vehicle distance measurement is one of the important research topics of intelligent driving assistance; it can effectively detect, track and range the vehicle ahead, thereby providing a basis for judgment by the driver or the assistance system and an important guarantee for safe driving.
Chinese patent publication No. CN108108667A discloses a fast distance measurement method for the vehicle ahead based on narrow-baseline binocular vision, comprising lane-line detection, lane-line region extraction, vehicle identification, vehicle-region determination and vehicle ranging; positioning, identification and ranging can only be completed by coordinating several subsystems, so real-time performance is relatively poor and accuracy is low. Huang Zeyu et al. designed a stereo-vision distance measuring system for unmanned vehicles (Modern Manufacturing Engineering, 2019(09): 113-117); the system only improves and optimizes the matching algorithm, and although it meets real-time and precision requirements, its target-feature detection is simple, its positioning is inaccurate and its practicality is poor.
In view of the shortcomings of existing vehicle distance measuring methods, a stable distance measurement method with high accuracy and good real-time performance is urgently needed. Accordingly, the invention adopts a binocular-vision distance measurement method and combines it with several recent techniques so that they fully coordinate with one another, effectively improving vehicle ranging performance.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provide a vehicle distance measuring system based on the MobileNet-SSD that is stable, highly accurate and has good real-time performance.
The invention further aims to provide a method that determines and ranges the target-vehicle region by combining vehicle detection and vehicle tracking algorithms in mutual coordination with an estimation algorithm, effectively exploiting the advantages of both and realizing an accurate, efficient vehicle ranging method based on the MobileNet-SSD.
The vehicle distance measuring system based on the MobileNet-SSD comprises a calibration module, an image acquisition module, a detection module, a first judgment module, a second judgment module, an estimation module, a tracking module, a stereo matching module and a distance measuring module; the calibration module, the image acquisition module and the detection module are sequentially connected, the output end of the detection module is connected with the first judgment module, and the output end of the first judgment module is respectively connected with the second judgment module and the tracking module; the input end of the stereo matching module is connected with the output ends of the tracking module and the estimation module, and the output end of the stereo matching module is connected with the distance measuring module.
The calibration module is used for calibrating the two cameras of the binocular system to obtain suitable binocular-camera parameters;
the image acquisition module is used for acquiring images in front of the vehicle in real time with the binocular camera;
the detection module is used for detecting vehicle-bottom candidate areas according to HSV color characteristics, further detecting the vehicle-bottom area through the MobileNet-SSD and determining the first-frame vehicle area;
the first judging module is used for judging whether a vehicle is detected in the first frame image; if not, it signals the input end of the second judging module to continue vehicle detection, and if so, it signals the tracking module to perform Deepsort target-vehicle tracking;
the second judging module is used for judging which eye detected the vehicle and selecting the other eye's image for vehicle-area estimation;
the estimation module is used for estimating the vehicle area according to the distance information of the previous frame image;
the tracking module is used for selecting the target image according to the result of the first judging module and performing Deepsort target-vehicle tracking to determine the final vehicle area;
the stereo matching module is used for carrying out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images;
the output end of the stereo matching module is connected with the ranging module, and the ranging module is used for calculating the average vehicle distance in the area to obtain the final vehicle distance.
The vehicle ranging method based on the MobileNet-SSD comprises the following steps:
Step 1: construct a binocular vision system, calibrate binocular vision, and establish the relation between camera image pixel positions and scene-point positions;
Step 2: the binocular camera synchronously acquires left-eye and right-eye images;
Step 3: carry out target-vehicle detection on the left-eye and right-eye images to determine the first-frame vehicle area;
Step 4: judge whether a vehicle is detected in the first frame; if not, continue vehicle detection, and if so, perform Deepsort target-vehicle tracking; if both the left-eye and right-eye images detect the vehicle, take the left-eye detection as primary, perform target tracking on the left eye and vehicle-area estimation on the right-eye image;
Step 5: according to the first-frame vehicle coordinate area determined by the MobileNet-SSD network, further determine the vehicle-area coordinates by the Deepsort target-vehicle tracking method;
Step 6: judge whether the left-eye image detected the vehicle; if not, estimate the vehicle area of the left-eye image to obtain its estimated vehicle-area coordinates; if so, estimate the vehicle area of the right-eye image to obtain its estimated vehicle-area coordinates;
Step 7: carry out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images;
Step 8: calculate the region-point disparity according to the disparity principle and obtain the average distance between the target object and the ego vehicle.
In step 3, the specific steps of detecting the target vehicle in the left-eye and right-eye images and determining the first-frame vehicle region include:
(1) converting each frame of RGB image to be detected, collected by the binocular camera, into an HSV image; defining a shadow discrimination formula using the HSV color characteristics to detect the vehicle-bottom shadow; segmenting all vehicle-bottom shadow regions with a shadow detection algorithm based on HSV color characteristics, judging a pixel to be a shadow pixel if it satisfies the shadow discrimination formula and a non-shadow pixel otherwise, thereby obtaining a preselected vehicle ROI region and determining the images in which vehicles may exist;
(2) establishing positive and negative sample data sets, wherein the positive samples are vehicle images under different conditions, such as different weather, illumination and vehicle conditions, with the vehicle and non-vehicle areas in each image annotated; the negative samples are traffic-scene images without vehicles, such as scenes containing road signs, trees, buildings and billboards, to eliminate environmental interference with target detection;
(3) building a MobileNet-SSD vehicle detection model and inputting the positive and negative sample data sets into it for training to obtain a trained, stable final MobileNet-SSD vehicle detection model;
(4) inputting the images in which vehicles may exist into the trained final MobileNet-SSD vehicle detection model online, eliminating non-vehicle-bottom shadow regions and detecting the real vehicle-bottom shadow regions to obtain the first-frame vehicle-region information.
Compared with the prior art, the invention has the advantages that:
the vehicle detection process comprises HSV vehicle shadow detection and a MobileNet-SSD vehicle detection algorithm, and a vehicle tracking algorithm is combined, so that the speed and the accuracy of obtaining the target vehicle region are improved, the determination of the target vehicle region is realized by adopting a pre-estimation algorithm on another target image by utilizing the characteristics of a binocular camera, the image identification process is simplified, the detection effect is improved, and the real-time and efficient distance measurement method is realized.
Drawings
FIG. 1 is a flow chart of a vehicle ranging method of the present invention;
FIG. 2 is a flow chart of a vehicle detection algorithm of the present invention;
FIG. 3 is a diagram of a MobileNet-SSD vehicle detection network architecture in accordance with the present invention;
FIG. 4 is a diagram of a vehicle ranging system according to the present invention.
Detailed Description
The following examples further describe the method of the invention in detail with reference to the accompanying drawings.
As shown in fig. 1, the flow of the vehicle distance measuring method of the present invention comprises the following steps:
Step 1: construct a binocular vision system, calibrate binocular vision to obtain the intrinsic parameters, extrinsic parameters and distortion coefficients, and establish the relation between camera image pixel positions and scene-point positions.
Step 2: the binocular camera synchronously collects left and right eye images.
Step 3: carry out target-vehicle detection on the left-eye and right-eye images to determine the first-frame vehicle area. FIG. 2 shows the flow of the vehicle detection algorithm of the present invention, which comprises the following steps:
The first step: segment all vehicle-bottom shadow areas from the images to be detected, collected by the binocular camera, with a shadow detection algorithm based on HSV color characteristics to obtain the preselected vehicle ROI images, thereby determining the images in which vehicles may exist. The specific steps are:
(1) convert each frame of RGB image into an HSV image;
(2) detect the vehicle-bottom shadow using the HSV color characteristics; because shadow has a larger influence on the V component, the V component is excluded and the H and S components are used. The following shadow discrimination formula is defined:
T_hl ≤ |H_c − H_b| ≤ T_hh and T_sl ≤ |S_c − S_b| ≤ T_sh

where H_c and S_c are the H and S component values of the current frame image, H_b and S_b are the H and S component values of the background image, and T_sl, T_sh, T_hl and T_hh are segmentation thresholds obtained by Otsu's dynamic thresholding algorithm;
(3) if a pixel satisfies the shadow discrimination formula, it is judged to be a shadow pixel, otherwise a non-shadow pixel, thereby obtaining the preselected vehicle ROI region.
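As an illustration, the shadow discrimination can be sketched as a vectorised mask; the inequality form and the fixed thresholds below are assumptions for this sketch, whereas the invention obtains its thresholds from Otsu's dynamic thresholding:

```python
import numpy as np

# Assumed fixed thresholds; the invention derives them with Otsu's method.
T_HL, T_HH = 0, 30   # bounds on the H-component difference
T_SL, T_SH = 0, 40   # bounds on the S-component difference

def shadow_mask(hsv_frame, hsv_background):
    """Mark a pixel as candidate under-vehicle shadow when its H and S
    differences against the background image fall inside the thresholds."""
    dh = np.abs(hsv_frame[..., 0].astype(int) - hsv_background[..., 0].astype(int))
    ds = np.abs(hsv_frame[..., 1].astype(int) - hsv_background[..., 1].astype(int))
    # The V component is deliberately ignored, as in the invention.
    return (T_HL <= dh) & (dh <= T_HH) & (T_SL <= ds) & (ds <= T_SH)
```

Applying the mask to each frame yields the preselected vehicle ROI pixels that the MobileNet-SSD stage then verifies.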
However, the actual road environment is complex and subject to interference from road signs, trees, buildings, billboards and the like, so the preselected vehicle ROI contains a large amount of false-detection information and further detection is still needed. Because the MobileNet-SSD network, which combines the MobileNet and SSD networks, balances target-recognition speed and precision while greatly compressing parameters and computation, it is adopted for this further detection.
The second step: establish positive and negative sample data sets; the positive samples are vehicle images under different conditions, such as different weather, illumination and vehicle conditions, with the vehicle and non-vehicle areas in each image annotated; the negative samples are traffic-scene images without vehicles, such as scenes containing road signs, trees, buildings and billboards, to eliminate environmental interference with target detection.
The third step: train the MobileNet-SSD vehicle detection network model to obtain a stable network model. The specific steps are:
(1) as shown in fig. 3, the MobileNet-SSD vehicle detection network of the present invention replaces the SSD feature-extraction network VGG16 with a MobileNet feature extractor stripped of its global average pooling, fully connected and Softmax layers, and appends eight convolutional layers to the MobileNet network to improve feature-extraction capability; feature maps of different sizes are generated from the six convolutional layers Conv11, Conv13, Conv14_2, Conv15_2, Conv16_2 and Conv17_2 to realize multi-scale detection, and finally heavily overlapping results are filtered by non-maximum suppression to obtain the final output;
(2) input the positive and negative sample data sets into the MobileNet-SSD model for training to obtain the trained final MobileNet-SSD vehicle detection model.
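The parameter saving that motivates swapping VGG16 for MobileNet comes from factoring each standard convolution into a depthwise and a pointwise convolution; a small parameter-count comparison (ignoring biases, for an assumed 3x3 layer with 256 input and 256 output channels) illustrates it:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in MobileNet's factorisation: one depthwise k x k filter
    per input channel, followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 256, 256)                   # 589824 weights
separable = depthwise_separable_params(3, 256, 256)   # 67840 weights
saving = standard / separable                         # roughly 8.7x fewer
```

The same factorisation applied throughout the backbone is what compresses the parameters and computation that the description refers to.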
The fourth step: input the images that may contain vehicles into the trained MobileNet-SSD model online, eliminate non-vehicle-bottom shadow regions and detect the real vehicle-bottom shadow regions to obtain the first-frame vehicle-region information.
Step 4: judge whether a vehicle is detected in the first frame; if not, continue vehicle detection, and if so, perform Deepsort target-vehicle tracking. If both the left-eye and right-eye images detect the vehicle, take the left-eye detection as primary, perform target tracking on the left eye and vehicle-area estimation on the right-eye image.
Step 5: according to the first-frame vehicle coordinate area determined by the MobileNet-SSD, the vehicle-area coordinates (x_i, y_i, w_i, h_i) are further determined by the Deepsort target-vehicle tracking method, and the frame coordinates of the vehicle area are acquired in real time, i.e., a rectangle framing the vehicle is displayed on the image. (x_i, y_i) are the coordinates of the upper-left corner of the vehicle area relative to the image, and w_i, h_i are the width and height of the vehicle area, where i = l for left-eye tracking and i = r for right-eye tracking.
Step 6: judge whether the left eye detected the vehicle.
If not, estimate the left-eye vehicle area. Let the coordinates of the left-eye image vehicle region be (x_l, y_l); the binocular cameras lie in the same y plane, so y_l = y_r, and x_l is given by:

x_l = x_r + b·f / z

where b is the baseline length, f is the camera focal length, and z is the distance from the camera to the vehicle ahead in the previous frame; the target width w_l = w_r and the target height h_l = h_r, giving the estimated vehicle-region coordinates (x_l, y_l, w_l, h_l).
If so, estimate the right-eye vehicle area. Let the coordinates of the right-eye image vehicle region be (x_r, y_r); the binocular cameras lie in the same y plane, so y_r = y_l, and x_r is given by:

x_r = x_l − b·f / z

with target width w_r = w_l and target height h_r = h_l, giving the estimated vehicle-region coordinates (x_r, y_r, w_r, h_r).
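The two estimation branches of step 6 can be condensed into one helper; this is a sketch under the assumption of a rectified pair, with illustrative b, f and previous-frame distance values:

```python
def estimate_region(x, y, w, h, b, f, z_prev, detected_eye="right"):
    """Estimate the undetected eye's vehicle box from the detected one.
    After rectification both boxes share the same row, width and height;
    only x shifts by the disparity b*f/z predicted from the previous
    frame's measured distance z_prev."""
    d = b * f / z_prev               # predicted disparity in pixels
    if detected_eye == "right":      # estimate the left-eye box: x_l = x_r + d
        return (x + d, y, w, h)
    return (x - d, y, w, h)          # estimate the right-eye box: x_r = x_l - d

# Right eye detected a box at x = 100; predict the left-eye box.
left_box = estimate_region(100.0, 50.0, 60.0, 40.0, b=0.12, f=800.0, z_prev=9.6)
```

With b = 0.12 m, f = 800 px and a previous-frame distance of 9.6 m, the predicted disparity is 10 px, so the estimated left-eye box starts at x = 110.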
Step 7: carry out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images to obtain n matched vehicle-region feature points in total.
Step 8: calculate the region-point disparity according to the disparity principle and obtain the average distance between the target object and the ego vehicle:

z = b·f / d_avg

where the average disparity is:

d_avg = (1/n) Σ_{i=1}^{n} d_i

with d_i the disparity of the i-th matched feature point.
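Step 8 reduces to averaging the matched-point disparities and applying the binocular relation z = b·f/d_avg; a minimal sketch with illustrative baseline and focal-length values:

```python
def region_distance(disparities, b, f):
    """Average the SGBM-matched region-point disparities (in pixels) and
    convert to metric distance with z = b * f / d_avg."""
    if not disparities:
        raise ValueError("no matched feature points in the vehicle region")
    d_avg = sum(disparities) / len(disparities)
    return b * f / d_avg

# Three matched points with disparities around 10 px, b = 0.12 m, f = 800 px.
z = region_distance([9.0, 10.0, 11.0], b=0.12, f=800.0)   # about 9.6 m
```

Averaging over all matched region points, rather than using a single pixel's disparity, smooths matching noise at the cost of blending foreground and background points that fall inside the box.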
FIG. 4 shows the structure of the vehicle distance measuring system of the present invention, whose modules are as follows:
The calibration module calibrates the two cameras of the binocular system to obtain suitable binocular-camera parameters.
The image acquisition module acquires images in front of the vehicle in real time with the binocular camera.
The detection module detects vehicle-bottom candidate areas according to HSV color characteristics, further detects the vehicle-bottom area through the MobileNet-SSD and determines the first-frame vehicle area; the first judgment module judges whether a vehicle is detected in the first frame image: if not, vehicle detection continues; if so, Deepsort target-vehicle tracking is performed, and if both the left-eye and right-eye images detect the vehicle, the left-eye detection is taken as primary and target tracking is carried out on the left eye; the tracking module selects the target image according to the result of the first judgment module and performs Deepsort target-vehicle tracking to determine the final vehicle area; the second judgment module judges which eye detected the vehicle and selects the other eye's image for vehicle-area estimation; the estimation module estimates the vehicle area according to the distance information of the previous frame image.
The stereo matching module carries out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images.
The distance measurement module calculates the average vehicle distance of the area according to the binocular disparity principle to obtain the final vehicle distance.
The above description further details the invention in connection with its preferred embodiments, but the practice of the invention is not limited to these descriptions. Those skilled in the art can make various modifications, additions and substitutions without departing from the spirit of the invention.

Claims (3)

1. A vehicle distance measuring system based on the MobileNet-SSD, characterized by comprising a calibration module, an image acquisition module, a detection module, a first judgment module, a second judgment module, an estimation module, a tracking module, a stereo matching module and a distance measuring module; the calibration module, the image acquisition module and the detection module are sequentially connected, the output end of the detection module is connected with the first judgment module, and the output end of the first judgment module is respectively connected with the second judgment module and the tracking module; the input end of the stereo matching module is connected with the output ends of the tracking module and the estimation module, and the output end of the stereo matching module is connected with the distance measuring module;
the calibration module is used for calibrating the two cameras of the binocular system to obtain suitable binocular-camera parameters;
the image acquisition module is used for acquiring images in front of the vehicle in real time with the binocular camera;
the detection module is used for detecting vehicle-bottom candidate areas according to HSV color characteristics, further detecting the vehicle-bottom area through the MobileNet-SSD and determining the first-frame vehicle area;
the first judging module is used for judging whether a vehicle is detected in the first frame image; if not, it signals the input end of the second judging module to continue vehicle detection, and if so, it signals the tracking module to perform Deepsort target-vehicle tracking;
the second judging module is used for judging which eye detected the vehicle and selecting the other eye's image for vehicle-area estimation;
the estimation module is used for estimating the vehicle area according to the distance information of the previous frame image;
the tracking module is used for selecting the target image according to the result of the first judging module and performing Deepsort target-vehicle tracking to determine the final vehicle area;
the stereo matching module is used for carrying out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images;
the output end of the stereo matching module is connected with the ranging module, and the ranging module is used for calculating the average vehicle distance in the area to obtain the final vehicle distance.
2. A vehicle ranging method based on the MobileNet-SSD, characterized by comprising the following steps:
Step 1: constructing a binocular vision system, calibrating binocular vision, and establishing the relation between camera image pixel positions and scene-point positions;
Step 2: synchronously acquiring left-eye and right-eye images with the binocular camera;
Step 3: carrying out target-vehicle detection on the left-eye and right-eye images to determine the first-frame vehicle area;
Step 4: judging whether a vehicle is detected in the first frame; if not, continuing vehicle detection, and if so, performing Deepsort target-vehicle tracking; if both the left-eye and right-eye images detect the vehicle, taking the left-eye detection as primary, performing target tracking on the left eye and vehicle-area estimation on the right-eye image;
Step 5: according to the first-frame vehicle coordinate area determined by the MobileNet-SSD network, further determining the vehicle-area coordinates by the Deepsort target-vehicle tracking method;
Step 6: judging whether the left-eye image detected the vehicle; if not, estimating the vehicle area of the left-eye image to obtain its estimated vehicle-area coordinates; if so, estimating the vehicle area of the right-eye image to obtain its estimated vehicle-area coordinates;
Step 7: carrying out SGBM stereo matching on the vehicle-region points of the left-eye and right-eye images;
Step 8: calculating the region-point disparity according to the disparity principle and obtaining the average distance between the target object and the ego vehicle.
3. The MobileNet-SSD based vehicle ranging method of claim 2, wherein in step 3 the specific steps of detecting the target vehicle in the left-eye and right-eye images comprise:
(1) converting each frame of RGB image to be detected, collected by the binocular camera, into an HSV image; defining a shadow discrimination formula using the HSV color characteristics to detect the vehicle-bottom shadow; segmenting all vehicle-bottom shadow regions with a shadow detection algorithm based on HSV color characteristics, judging a pixel to be a shadow pixel if it satisfies the shadow discrimination formula and a non-shadow pixel otherwise, thereby obtaining a preselected vehicle ROI region and determining the images in which vehicles may exist;
(2) establishing positive and negative sample data sets, wherein the positive samples are vehicle images under different conditions, such as different weather, illumination and vehicle conditions, with the vehicle and non-vehicle areas in each image annotated; the negative samples are traffic-scene images without vehicles, such as scenes containing road signs, trees, buildings and billboards, to eliminate environmental interference with target detection;
(3) building a MobileNet-SSD vehicle detection model and inputting the positive and negative sample data sets into it for training to obtain a trained, stable final MobileNet-SSD vehicle detection model;
(4) inputting the images in which vehicles may exist into the trained final MobileNet-SSD vehicle detection model online, eliminating non-vehicle-bottom shadow regions and detecting the real vehicle-bottom shadow regions to obtain the first-frame vehicle-region information.
CN202010647265.6A 2020-07-07 2020-07-07 Vehicle distance measuring system and method based on MobileNet-SSD Active CN111723778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010647265.6A CN111723778B (en) 2020-07-07 2020-07-07 Vehicle distance measuring system and method based on MobileNet-SSD

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010647265.6A CN111723778B (en) 2020-07-07 2020-07-07 Vehicle distance measuring system and method based on MobileNet-SSD

Publications (2)

Publication Number Publication Date
CN111723778A true CN111723778A (en) 2020-09-29
CN111723778B CN111723778B (en) 2022-07-19

Family

ID=72573854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010647265.6A Active CN111723778B (en) 2020-07-07 2020-07-07 Vehicle distance measuring system and method based on MobileNet-SSD

Country Status (1)

Country Link
CN (1) CN111723778B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733924A (en) * 2021-01-04 2021-04-30 哈尔滨工业大学 Multi-patch component detection method
CN113343891A (en) * 2021-06-24 2021-09-03 深圳市起点人工智能科技有限公司 Detection device and detection method for child kicking quilt
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886079A (en) * 2018-12-29 2019-06-14 Hangzhou Dianzi University A moving vehicle detection and tracking method
CN110231013A (en) * 2019-05-08 2019-09-13 Harbin University of Science and Technology A front-of-vehicle pedestrian detection and person-vehicle distance acquisition method based on binocular vision
CN110322702A (en) * 2019-07-08 2019-10-11 Zhongyuan University of Technology An intelligent vehicle speed measurement method based on a binocular stereo vision system
WO2020103427A1 (en) * 2018-11-23 2020-05-28 Huawei Technologies Co., Ltd. Object detection method, related device and computer storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
L. Yang et al.: "Vehicle Speed Measurement Based on Binocular Stereovision System", IEEE Access *
Z. Han et al.: "Design of Intelligent Road Recognition and Warning System for Vehicles Based on Binocular Vision", IEEE Access *
Song Zihao et al.: "Research on Target Ranging and Recognition Based on Vehicle Binocular Stereo Vision", Journal of Wuhan University of Technology *
Xu Xiaowei et al.: "A Vehicle Detection and Ranging Method for Virtual Scenes", Journal of Henan University of Science and Technology (Natural Science) *


Also Published As

Publication number Publication date
CN111723778B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN111951305B (en) Target detection and motion state estimation method based on vision and laser radar
CN110942449B (en) Vehicle detection method based on laser and vision fusion
CN107738612B (en) Automatic parking space detection and identification system based on panoramic vision auxiliary system
CN111436216B (en) Method and system for color point cloud generation
CN105711597B Forward local driving environment perception system and method
CN111553252B (en) Road pedestrian automatic identification and positioning method based on deep learning and U-V parallax algorithm
CN111723778B (en) Vehicle distance measuring system and method based on MobileNet-SSD
Geiger et al. Are we ready for autonomous driving? The KITTI vision benchmark suite
CN115032651B (en) Target detection method based on laser radar and machine vision fusion
CN108230392A An IMU-based obstacle detection false-alarm elimination method
JP2009176087A (en) Vehicle environment recognizing system
CN104318561A (en) Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN113920183A (en) Monocular vision-based vehicle front obstacle distance measurement method
KR20130053980A (en) Obstacle detection method using image data fusion and apparatus
CN112991369A (en) Method for detecting overall dimension of running vehicle based on binocular vision
JP4032843B2 (en) Monitoring system and monitoring method, distance correction device and distance correction method in the monitoring system
CN114120283A (en) Method for distinguishing unknown obstacles in road scene three-dimensional semantic segmentation
CN113848545A (en) Fusion target detection and tracking method based on vision and millimeter wave radar
CN115308732A (en) Multi-target detection and tracking method integrating millimeter wave radar and depth vision
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN113487631B (en) LEGO-LOAM-based adjustable large-angle detection sensing and control method
CN114463303A (en) Road target detection method based on fusion of binocular camera and laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant