CN115131980A - Target identification system and method for intelligent automobile road driving - Google Patents


Info

Publication number
CN115131980A
CN115131980A (application CN202210420131.XA)
Authority
CN
China
Prior art keywords
image
resistor
capacitor
target object
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210420131.XA
Other languages
Chinese (zh)
Inventor
吴逸飞
赵峥
高峰
鲍晨凯
Current Assignee
Bestar Holding Co ltd
Original Assignee
Bestar Holding Co ltd
Priority date
Filing date
Publication date
Application filed by Bestar Holding Co ltd filed Critical Bestar Holding Co ltd
Priority to CN202210420131.XA
Publication of CN115131980A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 Systems involving transmission of highway information where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G 1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a target recognition system and method for intelligent automobile road driving, belonging to the field of intelligent automobiles. A sensor unit consisting of an ultrasonic sensor, an infrared sensor and a camera performs image acquisition; the ultrasonic sensor locates the target object, and data calibration that takes the weather and ambient conditions into account ensures distance accuracy. At the same time, the infrared sensor and the camera jointly acquire image information of the target object; an integrated computing processing unit processes the acquired images and the reflected-ultrasound data and extracts the feature information of the target object. Finally, a fusion processing unit fuses the acquired images, ensuring the accuracy and stability of the information. The method and device can therefore accurately identify target-object information in severe weather or insufficient illumination, ensuring the driving safety of the user.

Description

Target identification system and method for intelligent automobile road driving
Technical Field
The invention relates to a target identification system and a target identification method for intelligent automobile road driving, and belongs to the field of intelligent automobiles.
Background
The automobile is an indispensable part of people's daily life, whether as mass transit or as a personal vehicle. The vehicle industry has long been representative of traditional manufacturing, and its many component sectors have developed vigorously. As vehicles have gradually penetrated public life and their functionality has grown more complex, many electronic components never used before now appear on vehicles; luxury and comfort are no longer people's only criteria, and the safety of driving is increasingly valued by the public.
The intelligent automobile is a comprehensive system integrating environmental perception, planning and decision-making, multi-level driver assistance and other functions. It intensively applies computing, modern sensing, information fusion, communication, artificial intelligence and automatic control technologies, and is a typical high-tech complex. At present, research on intelligent automobiles mainly aims to improve vehicle safety and comfort and to provide excellent human-vehicle interaction interfaces.
Smart cars offer many functions that were not possible in the past, such as active safety systems, automatic driving systems and real-time traffic-flow navigation. Unlike traditional vehicles, they integrate automotive, semiconductor, electronics, Internet-of-Things, communication and optoelectronic technologies, providing the required functions through devices such as IoT sensors, radar and wireless communication. In the past, vehicle safety was improved by applying multiple technologies; future technologies will act before an accident can occur, so establishing standards related to safety protection is very important. As embedded technologies develop and come into use, the design and analysis of automotive safety systems keep improving, and many new technologies, such as biometrics, image processing and communication, have been integrated into them. At the same time, the number of car accidents remains high and the resulting losses severe, so a practical automotive safety system should be efficient, robust and reliable. In the prior art, an intelligent automobile driving or parking must intelligently identify and detect target objects, but acquisition is mostly performed with a camera alone; in severe weather or insufficient illumination the acquired image is very blurred, so the system cannot identify the target object, which may affect the driving safety of the user.
Disclosure of Invention
The purpose of the invention is as follows: to provide a target recognition system and method for intelligent automobile road driving that solves the prior-art problem that intelligent identification and detection of a target object during driving or parking relies mostly on camera acquisition, so that in severe weather or a scene with insufficient illumination the acquired image is very blurred, the system cannot identify the target object, and the driving safety of the user may be affected.
The technical scheme is as follows: in a first aspect, a target recognition method for intelligent vehicle road driving includes: step 1, firstly, an ultrasonic sensor in the sensor unit determines the approximate position and distance of the target object, which are processed by the integrated computing and processing unit and sent to the control unit;
step 2, detecting the specific information and position of the target object through the infrared sensor and the camera in the sensor unit, processing them through the fusion processing unit and sending them to the control unit;
and step 3, the control unit outputs the signal from the fusion processing unit to the display unit for the user's reference, while real-time tracking of the target object is carried out.
In a further embodiment, in step 1 of the method, the integrated computing processing unit performs algorithmic correction for both the deficiencies of the basic calibration and the actual measurement error, so as to obtain more accurate ranging information;
firstly, before ranging, data calibration is needed according to external temperature information and measurement time information in echo data, and temperature influence and ranging system errors can be reduced through calibration; the ambient temperature affects the ultrasonic velocity and thus the actual obstacle distance identification, so the actual temperature value needs to be calculated through the temperature error:
E1 = E - a*E2 - b*E3 - c*E4
where E represents the temperature value in the echo data, E1 the actual temperature of the current environment, E2 the required temperature set by the system, E3 the temperature outside the vehicle, E4 the temperature near the current system, and a, b, c the correction coefficients. The current ambient temperature is derived by fusing the temperatures set by different systems with the vehicle's outside temperature: the system issues a command to acquire a temperature value in the gap between transmitted pulses, the real-time temperature of the system's ultrasonic sensor is set to be actively acquired every 1 s, and the outside temperature of the vehicle can be read from the corresponding CAN bus. Taking the bypass system temperature as reference, the outside temperature and the bypass system temperature are corrected to obtain the actual ambient temperature value, the correction coefficients are calibrated against it, and the temperature-value signal is finally output after sliding-window filtering;
secondly, although ultrasonic waves travel fast, a certain time error arises during propagation, so the time must be calibrated to obtain an accurate echo time. A higher-precision timestamp is added during wave transmission and reception, and time-alignment calibration is performed for each transmission channel as follows:
t1 = t + (t2 - t3)
where t1 represents the actual echo time, t2 the transmit timestamp of the transmitting channel, and t3 the timestamp of the current receiving channel;
thus, the actual distance value is obtained according to the calculated temperature error and the time error, namely:
[unrenderable equation image BDA0003606508480000021: the actual ultrasonic speed v_actual as a function of the corrected temperature E1]
L_actual = v_actual * t1 / 2 = N * v_actual / (2f)
where L_actual represents the actual distance of the target object from the car and v_actual the actual propagation speed of the ultrasonic wave; the actual distance value signal is transmitted to the control unit.
In a further embodiment, in step 2, the infrared sensor and the camera in the sensor unit detect the specific information and position of the target object; the specific steps are as follows:
step 2-1, processing images of information collected by the infrared sensor and the camera;
step 2-2, positioning a target object in the image and extracting characteristics;
and step 2-3, performing image fusion according to the extracted features and outputting the image information to the control unit.
In a further embodiment, the image processing work is to perform image gray scale conversion, image denoising and image enhancement work on the images collected by the infrared sensor and the camera, so as to remove the internal noise of the images and enhance the information contrast; the specific process is as follows:
firstly, carrying out gray level conversion;
secondly, carrying out grayscale image denoising treatment;
thirdly, image enhancement is carried out;
in a further embodiment, the output infrared gray image and the visible light gray image are subjected to target object detection, positioning and feature extraction in the image, so as to extract specific features related to the target object; the method comprises the following specific steps:
firstly, positioning an image target;
secondly, extracting the characteristics of the target object;
in a further embodiment, the output infrared image features and the output visible light image features are fused, and the huge difference in the imaging principle causes that the information type difference contained in the infrared image and the visible light image is also huge, so that the fused image has more various information types compared with a single-mode image, the fused image is beneficial to improving the stability and the accuracy of the target detection and identification process, and the adaptability of an automobile auxiliary driving system is greatly improved; the specific method comprises the following steps:
carrying out weighted average on gray values corresponding to pixels of the original image according to a set weight value, thereby obtaining a fused image:
F(x,y)=α*A(x,y)+β*B(x,y)
where F(x, y) represents the fused image, A(x, y) the infrared image, B(x, y) the visible-light image, and α, β the weight coefficients of infrared and visible light respectively, with α + β = 1;
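As an illustrative sketch (function and variable names are my own, not from the patent), the weighted-average fusion above can be written as:

```python
import numpy as np

def fuse_weighted(ir, vis, alpha):
    """F(x, y) = alpha * A(x, y) + beta * B(x, y), with beta = 1 - alpha,
    applied elementwise to same-sized grayscale images."""
    beta = 1.0 - alpha
    return alpha * ir.astype(float) + beta * vis.astype(float)

# Two toy 2x2 "images": equal weights average them pixel by pixel
ir = np.array([[100, 200], [50, 0]])
vis = np.array([[0, 100], [150, 200]])
print(fuse_weighted(ir, vis, 0.5))  # [[ 50. 150.] [100. 100.]]
```

In practice α would be raised when the visible-light image is degraded (night, fog) so the infrared channel dominates the fused result.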
in a second aspect, an object recognition system for intelligent vehicle road driving includes:
a sensor unit for detecting a target object;
the integrated computing processing unit is used for processing the data of the collected images and the reflected ultrasonic waves and extracting the characteristic information of the target object;
the fusion processing unit is used for fusing the acquired images so as to ensure the accuracy and stability of the information;
the control unit is used for carrying out transmission communication of information;
and the display unit is used for presenting the target object image.
In a further embodiment, the sensor unit consists of an ultrasonic sensor, an infrared sensor and a camera.
In a further embodiment, a filtering module is provided inside the sensor unit, the filtering module comprising: the resistors R1, R2, R3, R4, R5, R6 and R7, the capacitors C1, C2, C3, C4, C5 and C6, the diodes D1 and D2, the triode Q1, the MOS transistor Q2 and the rheostat RV1;
One end of the capacitor C1 is connected to one end of the resistor R3 and receives the input signal; the other end of C1 is connected to one end of the resistor R1 and the base of the triode Q1. The collector of Q1 is connected to the other end of R1, one end of the capacitor C2 and one end of the resistor R2; the emitter of Q1 is connected to one end of the resistor R7. The other end of C2 is connected to the anode of the diode D1 and the cathode of the diode D2; the anode of D2 is connected to one end of the capacitor C3, one end of the resistor R4, one end of the rheostat RV1 and the gate of the MOS transistor Q2. The source of Q2 is connected to one end of the capacitor C4, and the drain of Q2 is connected to one end of the capacitor C5, one end of the resistor R5 and one end of the resistor R6. The other end and the control end of RV1 are connected together to the other end of R2 and the other end of R5 and receive the working voltage. The other end of C4 is connected to the other end of R3 and one end of the capacitor C6. The other end of R6 is connected to the other end of C5, the other end of R4, the other end of C3, the cathode of D1 and the other end of R7, and grounded. The other end of C6 outputs the signal.
In a further embodiment, the resistor R3, the capacitor C4 and the MOS transistor Q2 form a filter circuit. When the input signal is weak, the voltage output by the rectifier circuit formed by the diodes D1 and D2 and the capacitor C3 is small, the MOS transistor Q2 conducts more deeply, and high-frequency noise in the input signal is filtered by R3 and C4. When the input signal is strong, the negative voltage rectified by D1, D2 and C3 increases, Q2 turns off, and the attenuation of the high-frequency signal by R3 and C4 is reduced or removed; the rheostat RV1 sets the noise-suppression action level.
Beneficial effects: the invention relates to a target recognition system and method for intelligent automobile road driving, belonging to the field of intelligent automobiles. A sensor unit consisting of an ultrasonic sensor, an infrared sensor and a camera performs image acquisition; the ultrasonic sensor locates the target object, and data calibration that takes the weather and ambient conditions into account ensures distance accuracy. At the same time, the infrared sensor and the camera jointly acquire image information of the target object; an integrated computing processing unit processes the acquired images and the reflected-ultrasound data and extracts the feature information of the target object. Finally, a fusion processing unit fuses the acquired images, ensuring the accuracy and stability of the information. The method and device can therefore accurately identify target-object information in severe weather or insufficient illumination, ensuring the driving safety of the user.
Drawings
FIG. 1 is a schematic of the process of the present invention.
Fig. 2 is a schematic view of ultrasonic ranging of the present invention.
FIG. 3 is a schematic diagram of the image processing of the target object acquisition of the present invention.
FIG. 4 is a flowchart of the grayscale image denoising process of the present invention.
Fig. 5 is a schematic diagram of the system of the present invention.
Fig. 6 is a schematic diagram of a filtering module of the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these details; in other instances, well-known features have not been described in order to avoid obscuring the invention.
As shown in fig. 1 to 4, a target recognition method for intelligent road driving of an automobile includes:
step 1, firstly, an ultrasonic sensor in a sensor unit determines the approximate position and distance of a target object, and the approximate position and distance are processed by an integrated computing and processing unit and sent to a control unit;
step 2, detecting the specific information and position of the target object through the infrared sensor and the camera in the sensor unit, processing them through the fusion processing unit and sending them to the control unit;
and step 3, the control unit outputs the signal from the fusion processing unit to the display unit for the user's reference, while real-time tracking of the target object is carried out.
In the method step 1, when target identification is required, the control unit sends an identification task to an identification system in the automobile, so that an ultrasonic sensor of a sensor unit in the identification system works; the method comprises the following specific steps:
firstly, an ultrasonic transmitting device transmits ultrasonic waves to a certain direction, timing is started, the ultrasonic waves return to an ultrasonic receiving device to form a reflected wave after contacting with an obstacle, and then an ultrasonic sensor generates an analog signal which is transmitted to an integrated computing processing unit;
secondly, the integrated calculation processing unit receives the signal and calculates the distance of the target object: it measures the time for the ultrasonic wave emitted by the transmitter to propagate through the gas medium to the receiver, i.e. the round-trip time; the round-trip time multiplied by the sound velocity in the gas medium gives the distance the sound wave travelled, and the measured distance is half of that, namely:
L=v*t/2
where L represents the distance of the target object from the automobile, v the speed of the sound wave emitted by the ultrasonic sensor (the speed varies with the sensor), and t the time between emission and return of the ultrasonic wave. Because ranging errors arise in ultrasonic distance detection, the pulse-counting technique is used to ensure accuracy, converting the measurement of the ultrasonic round-trip time into a count of pulses, namely:
t = N/f
L=N*v/2f
where N represents the number of counting pulses and f the frequency of the counting pulses; the distance between the target object and the vehicle is thereby obtained, and the signal is transmitted to the integrated computing and processing unit;
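The pulse-counting distance estimate L = N·v/(2f) can be sketched as follows (names and the example numbers are my own, chosen only for illustration):

```python
def distance_from_pulses(n_pulses, v, f):
    """t = N / f gives the round-trip time from the pulse count, and
    L = v * t / 2 = N * v / (2 * f) gives the one-way distance."""
    return n_pulses * v / (2.0 * f)

# 1000 pulses counted at f = 100 kHz with v = 340 m/s:
# round-trip time t = 0.01 s, so the target is 340 * 0.01 / 2 = 1.7 m away
print(distance_from_pulses(1000, 340.0, 100000.0))  # 1.7
```

Counting pulses of a known frequency avoids having to time the echo directly, which is why the scheme converts the time measurement into a count.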
the integrated calculation processing unit then performs algorithmic correction for both the deficiencies of the basic calibration and the actual measurement error, so as to obtain more accurate ranging information;
firstly, before ranging, data calibration is needed according to external temperature information and measurement time information in echo data, and temperature influence and ranging system errors can be reduced through calibration; the ambient temperature affects the ultrasonic velocity and thus the actual obstacle distance identification, so the actual temperature value needs to be calculated through temperature error:
E1 = E - a*E2 - b*E3 - c*E4
where E represents the temperature value in the echo data, E1 the actual temperature of the current environment, E2 the required temperature set by the system, E3 the temperature outside the vehicle, E4 the temperature near the current system, and a, b, c the correction coefficients. The current ambient temperature is derived by fusing the temperatures set by different systems with the vehicle's outside temperature: the system issues a command to acquire a temperature value in the gap between transmitted pulses, the real-time temperature of the system's ultrasonic sensor is set to be actively acquired every 1 s, and the outside temperature of the vehicle can be read from the corresponding CAN bus. Taking the bypass system temperature as reference, the outside temperature and the bypass system temperature are corrected to obtain the actual ambient temperature value, the correction coefficients are calibrated against it, and the temperature-value signal is finally output after sliding-window filtering;
secondly, although ultrasonic waves travel fast, a certain time error arises during propagation, so the time must be calibrated to obtain an accurate echo time. A higher-precision timestamp is added during wave transmission and reception, and time-alignment calibration is performed for each transmission channel as follows:
t1 = t + (t2 - t3)
where t1 represents the actual echo time, t2 the transmit timestamp of the transmitting channel, and t3 the timestamp of the current receiving channel; the actual distance value is thus obtained from the calculated temperature error and time error, namely:
[unrenderable equation image BDA0003606508480000071: the actual ultrasonic speed v_actual as a function of the corrected temperature E1]
L_actual = v_actual * t1 / 2 = N * v_actual / (2f)
where L_actual represents the actual distance of the target object from the car and v_actual the actual propagation speed of the ultrasonic wave; the actual distance value signal is transmitted to the control unit.
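The calibration chain can be sketched as below. All names and numbers are illustrative: the coefficients a, b, c would come from calibration, and since the patent's speed formula survives only as an unreadable image, the textbook approximation v ≈ 331.4 + 0.607·T m/s is substituted here purely for illustration.

```python
def corrected_temperature(E, E2, E3, E4, a, b, c):
    """E1 = E - a*E2 - b*E3 - c*E4: fuse the echo-data temperature with the
    system set temperature, outside temperature and near-system temperature."""
    return E - a * E2 - b * E3 - c * E4

def corrected_echo_time(t, t2, t3):
    """t1 = t + (t2 - t3): align the raw echo time using the transmit-channel
    timestamp t2 and the receive-channel timestamp t3."""
    return t + (t2 - t3)

def actual_distance(v_actual, t1):
    """L_actual = v_actual * t1 / 2 (half the round trip)."""
    return v_actual * t1 / 2.0

# Illustration only: E1 = 30 - 0.1*20 - 0.2*25 - 0.05*28 = 21.6 degrees C
E1 = corrected_temperature(E=30.0, E2=20.0, E3=25.0, E4=28.0, a=0.1, b=0.2, c=0.05)
v = 331.4 + 0.607 * E1                          # assumed speed-of-sound model
t1 = corrected_echo_time(t=0.010, t2=1.0e-4, t3=0.5e-4)
print(round(actual_distance(v, t1), 3))         # corrected distance in metres
```

The point of the chain is that both error sources feed the final distance: temperature corrects the speed, and the timestamps correct the time.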
In step 2, the infrared sensor and the camera in the sensor unit detect the specific information and position of the target object; the specific steps are as follows:
step 2-1, processing images of information collected by the infrared sensor and the camera;
step 2-2, positioning a target object in the image and extracting characteristics;
and step 2-3, performing image fusion according to the extracted features and outputting the image information to the control unit.
Example 1:
In the method, the image processing work performs gray-scale conversion, image denoising and image enhancement on the images collected by the infrared sensor and the camera, removing the internal noise of the images and enhancing the information contrast; the specific process is as follows:
firstly, carrying out gray level conversion;
the pixel points of the input infrared and visible-light images are decomposed into R, G, B values, so that each pixel matrix corresponds to three color matrices: an R matrix, a G matrix and a B matrix. The maximum value of each matrix is 255 and the minimum is 0, and every pixel in the image has an RGB value; gray processing is first applied to each pixel, namely:
R1 = G1 = B1 = (R + G + B)/3
where R1, G1, B1 represent the pixel values after gray processing;
secondly, a gray threshold is selected, or the mean gray value of all pixels in the matrix is computed, as the reference; the gray value of every pixel in the matrix is compared with this value in turn, pixels greater than it are set to 255 and pixels less than or equal to it are set to 0, completing the binarization of the image and highlighting image details.
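The averaging gray conversion and mean-threshold binarization above can be sketched as follows (function names are my own):

```python
import numpy as np

def to_gray_average(rgb):
    """R1 = G1 = B1 = (R + G + B) / 3, applied per pixel of an HxWx3 array."""
    return rgb.astype(float).mean(axis=-1)

def binarize(gray, threshold=None):
    """Pixels above the threshold (the image mean when none is given)
    become 255; the rest become 0."""
    if threshold is None:
        threshold = gray.mean()
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

rgb = np.array([[[30, 60, 90], [200, 220, 240]]])  # one row, two pixels
gray = to_gray_average(rgb)                        # [[60., 220.]]
print(binarize(gray))                              # [[  0 255]]
```

Using the image mean as the default threshold matches the "average of the gray values" option in the text; a fixed threshold can be passed instead.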
Secondly, carrying out grayscale image denoising treatment;
the infrared image and the visible-light image inevitably suffer losses during acquisition and transmission and are easily polluted by noise, so denoising is required. Such noise is the most frequent interference in an image; its types are many and its values are extreme, so noise points are difficult to separate from the original image. If the pixel value of a point is outside the range where target information can appear, it is likely a noise point, but filtering it directly would blur the filtered image, so a more careful analysis is required; the specific steps are as follows:
step S1, first a convolution kernel is selected, and its size must be odd; let the pixel values of the image be J1, J2, J3, …, Jn and the order of the convolution kernel be m × m. The convolution kernel traverses the image, and each time it moves, the pixel points inside it are rearranged from small to large; the number in the middle of the array is its median, and this value is taken as the pixel value of the central pixel point of the convolution kernel;
step S2, judging whether the pixel value of the center point of the convolution kernel is the maximum value or the minimum value in the neighborhood, if so, indicating that the point is a noise point, and entering the step S5; if not, go to step S3;
step S3, judging whether the pixel value of the point is the secondary extreme value, if so, indicating that the point is a noise point, and entering step S5; if not, go to step S4;
step S4, judging whether the absolute value of the difference between the pixel value of the point and the secondary extreme value is smaller than a set threshold value, if so, indicating that the point is a noise point, and entering step S5; otherwise, the point is not a noise point and is not processed;
step S5, for a confirmed noise point, first examine a 3 × 3 window and count the non-noise points: if there are at least 3, assign the median of the non-noise points to the central pixel of the convolution kernel; if fewer than 3, enlarge the window to 5 × 5, scan again and count the non-noise points; if there are at least 9, assign their median to the central pixel; if fewer than 9, enlarge the window to 7 × 7. At this size, to improve the efficiency of the algorithm, simply check whether the number of non-noise points is 0: if it is 0, every point in the window is noise, and the mean of the four neighboring pixel values of the central pixel of the convolution kernel is assigned to it; if it is not 0, the median of the non-noise points is assigned as before;
step S6, remove the determined noise points and extract the m values J(i−k), …, J(i−1), J(i), J(i+1), …, J(i+k) from the pixel-value array of the image; sort these m values, and the filtered output is the middle value after sorting, namely: Y(i) = med(J(i−k), …, J(i−1), J(i), J(i+1), …, J(i+k)), where k = (m − 1)/2. The image is thus filtered and the noise in it removed.
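A minimal sketch of the noise detection and replacement in steps S1 to S6 (deliberately simplified: the growing-window logic of step S5 is collapsed into a single median over the non-extreme window values, and all names are my own):

```python
import numpy as np

def adaptive_median_denoise(img, m=3):
    """Flag a pixel as impulse noise when it is an extreme of its m x m
    window (steps S2-S3, simplified) and replace it with the median of the
    remaining window values (step S5, simplified)."""
    out = img.astype(float).copy()
    k = m // 2
    pad = np.pad(img.astype(float), k, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = np.sort(pad[y:y + m, x:x + m].ravel())
            v = img[y, x]
            if v <= win[1] or v >= win[-2]:       # window extreme -> noise
                out[y, x] = np.median(win[1:-1])  # median of the rest
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                          # one impulse ("salt") pixel
print(adaptive_median_denoise(img)[2, 2])  # 10.0
```

Unlike a plain median filter, this only rewrites pixels flagged as extremes, which is the blur-avoiding idea behind the patent's steps S2 to S4.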
Thirdly, image enhancement is carried out;
Because the contrast of the denoised infrared and visible-light images is low, particularly when the target object is far from the sensor or its temperature is close to the ambient temperature, the edge details distinguishing the target object from the background in the image are not obvious, which makes the subsequent detection and positioning operations difficult; the image must therefore be enhanced to adjust the distribution of image pixels and improve the contrast of the image. The method comprises the following steps:
firstly, the images collected by the infrared sensor and the camera are input into the integrated computing processing unit; the collected images then undergo tone-scale (color-gradation) conversion, which processes the gradation value of each image pixel so as to suppress noise information and highlight useful information; the mathematical expression of the piecewise linear tone-scale transformation is:
p = (c/a) · i                                for 0 ≤ i < a
p = ((d − c)/(b − a)) · (i − a) + c          for a ≤ i < b
p = ((V − d)/(U − b)) · (i − b) + d          for b ≤ i ≤ U

i = i(x, y)
in the formula, p represents the pixel value of the image after transformation, i represents the pixel value of the source image at (x, y), (a, b) is the selected input (x-axis) gray-scale interval and (c, d) the corresponding output (y-axis) gray-scale interval, and U and V are the maximum gray values on the x-axis and y-axis respectively; any interval can thus be expanded or compressed by adjusting the positions of the polyline inflection points and controlling the slope of each linear segment. After the piecewise linear tone-scale conversion, the contrast of the infrared night-vision image and the visible-light image is enhanced, and the target is easier to identify.
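The transform above can be sketched as a minimal NumPy function; the three-segment formulation and the default 8-bit maxima U = V = 255 are assumptions consistent with the symbols a, b, c, d, U and V in the text:

```python
import numpy as np

def piecewise_linear(i, a, b, c, d, U=255, V=255):
    """Stretch the source gray interval [a, b] onto the output interval [c, d];
    the remaining ranges are mapped linearly onto [0, c] and [d, V]."""
    i = np.asarray(i, dtype=float)
    low = (c / a) * i
    mid = (d - c) / (b - a) * (i - a) + c
    high = (V - d) / (U - b) * (i - b) + d
    return np.where(i < a, low, np.where(i < b, mid, high))
```

Choosing a narrow input interval [a, b] with a wide output interval [c, d] raises the slope of the middle segment, which is what stretches the contrast of the gray range the target occupies.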
Example 2:
in the method, detection, positioning and feature extraction of the target object are carried out on the output infrared gray image and visible-light gray image, so that the specific features of the target object are extracted; the specific steps are as follows:
firstly, positioning an image target;
the two input gray-level images contain both target-object information and other information; the useful information needs to be extracted while the other information is treated as background, so that the saliency of the target-object information is preserved; the specific steps are as follows:
firstly, the infrared image and the visible-light image to be detected are processed with a top-hat transform, which enhances the target information in the original image and suppresses background noise; because their areas are smaller than that of the structuring element, some bright spots that may contain noise points are retained, so if the image were segmented by thresholding alone, false target points could not be eliminated and the false-alarm rate would rise;
secondly, the original image is filtered using a two-dimensional discrete wavelet transform with suitable coefficients, so that it suffers less noise interference; even so, owing to the influence of the equipment, background noise and the like, the background points in the original image cannot be completely removed;
and finally, the two processed images are combined and handled with an image-processing method that considers time-domain and frequency-domain characteristics simultaneously; after this processing the probability of false alarms is greatly reduced while the target detection rate is improved, and the target-object information is output.
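The top-hat step can be illustrated with plain NumPy; the flat k x k structuring element and the helper names are assumptions (the patent does not specify the element's shape):

```python
import numpy as np

def _windows(img, k):
    """All k*k shifted views of the edge-padded image, for flat morphology."""
    p = k // 2
    pad = np.pad(img, p, mode="edge")
    h, w = img.shape
    return [pad[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)]

def white_tophat(img, k=5):
    """Image minus its morphological opening with a flat k x k element: small
    bright targets narrower than the element survive, the background is removed."""
    img = np.asarray(img, dtype=int)
    eroded = np.min(_windows(img, k), axis=0)       # erosion: local minimum
    opening = np.max(_windows(eroded, k), axis=0)   # then dilation: local maximum
    return img - opening
```

A bright spot smaller than the k x k element is flattened by the opening, so it appears at full amplitude in the difference image while a slowly varying background cancels out, which is the background-suppression behaviour described above.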
Secondly, extracting the characteristics of the target object;
the features of the target object are extracted from the positioned target-object information by describing them with rectangular frames in the image; in practice the calculation of Haar-like feature values is usually combined with the integral-image idea so as to improve the efficiency of feature-value computation;
a pixel in the positioned image is selected; the integral-image value at (x, y) is the sum of the pixel values of all points in the second quadrant of a coordinate system with that pixel as origin (i.e. above and to the left of it). As long as the integral image of the original image is known, the sum of pixel values over a rectangular area of any size can be obtained rapidly, namely:
G(x, y) = G(x − 1, y) + S(x, y)
S(x, y) = S(x, y − 1) + I(x, y)
G(x, y) = Σ_{x′ ≤ x} Σ_{y′ ≤ y} I(x′, y′)
wherein I(x, y) represents the pixel value of the image at (x, y), G(x, y) represents the value of the integral image at (x, y), and S(x, y) represents the cumulative sum of the pixel values in the column above and including (x, y); with the integral image of the original image known, the sum of pixel values in a rectangular area of any size in the original image can be obtained rapidly, so the features of the target object can be obtained, and the infrared and visible-light image features are output to the next step.
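The integral-image construction and the four-lookup rectangle sum can be sketched with NumPy cumulative sums; `rect_sum` and its row/column index convention are illustrative:

```python
import numpy as np

def integral_image(img):
    """G: cumulative sum over rows and columns, so G[x, y] holds the sum of all
    pixels with row index <= x and column index <= y."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def rect_sum(G, x0, y0, x1, y1):
    """Sum over the rectangle rows x0..x1, columns y0..y1 from four lookups,
    which is what makes Haar-like feature values cheap to evaluate."""
    total = G[x1, y1]
    if x0 > 0:
        total -= G[x0 - 1, y1]
    if y0 > 0:
        total -= G[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        total += G[x0 - 1, y0 - 1]
    return int(total)
```

A Haar-like feature value is then just the difference of two or three such rectangle sums, so each feature costs a constant number of array lookups regardless of rectangle size.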
Example 3:
in the method, the output infrared image features and visible-light image features are fused; the large difference in imaging principle means that the types of information contained in the infrared and visible-light images also differ greatly, so the fused image carries more varied information than a single-modality image. This helps improve the stability and accuracy of target detection and identification and greatly improves the adaptability of the automobile driver-assistance system; the specific method is:
the gray values of corresponding pixels of the original images are weighted-averaged with set weights to obtain the fused image: F(x, y) = αA(x, y) + βB(x, y), where F(x, y) represents the fused image, A(x, y) the infrared image, B(x, y) the visible-light image, and α and β the weight coefficients of infrared and visible light respectively, with α + β = 1. The fusion of the infrared and visible-light images is the fusion of the information detected by the infrared sensor and the camera; since the image information they detect is influenced by the driving speed of the automobile, different speeds yield different acquired images of the target and background, so different weight coefficients need to be set according to the vehicle speed when the image is acquired, namely:
when the vehicle speed is less than 60 km/h, the collected images come mainly from the infrared sensor, i.e. the ratio of the infrared to visible-light fusion weighting coefficients is 1 : 0; at low speed the driver needs to attend to more target information on the road ahead, such as potholes, wood on the road surface, or goods dropped from vehicles ahead.
When the vehicle speed is 60-90 km/h, the ratio of the infrared to visible-light fusion weighting coefficients is 1 : 1; this speed band has the highest accident rate, and comprehensively exploiting the advantages of infrared and visible-light imaging enlarges the driver's field of view and helps avoid driving accidents.
When the vehicle speed is more than 90 km/h, the ratio of the infrared to visible-light fusion weighting coefficients is 0 : 1 and acquisition relies mainly on the visible-light image, because a higher driving speed demands a detection distance greater than the vehicle-mounted infrared sensor provides.
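The speed-dependent weighting rule above can be sketched directly; the behaviour exactly at the 60 and 90 km/h boundaries is an assumption, since the text gives only the three bands, and the 1 : 1 case is normalized to α = β = 0.5 so that α + β = 1:

```python
def fusion_weights(speed_kmh):
    """(alpha, beta) for infrared/visible fusion per the three speed bands."""
    if speed_kmh < 60:
        return 1.0, 0.0        # 1 : 0, infrared only
    if speed_kmh <= 90:
        return 0.5, 0.5        # 1 : 1, normalized so alpha + beta = 1
    return 0.0, 1.0            # 0 : 1, visible light only

def fuse(ir, vis, speed_kmh):
    """F(x, y) = alpha * A(x, y) + beta * B(x, y), applied per pixel."""
    a, b = fusion_weights(speed_kmh)
    return a * ir + b * vis
```

`fuse` works unchanged on NumPy image arrays, since the arithmetic broadcasts elementwise.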
In step 3, the control unit outputs the signal from the fusion processing unit to the display unit for the client's reference and at the same time tracks the target object in real time. Once the specific position of the target object is determined, the image information and position must be updated in real time while the automobile is in a driving state, so image tracking is required. The gray histogram of a target in an infrared image is not affected by the target's form, so the histogram can be regarded as a model of the target, and matching the target by its color distribution gives stable results. The target is represented as a rectangle with center w and width u; a pixel position is denoted x and the pixel values are quantized into bins 1 to N − 1. Because occlusion by obstacles affects some pixel points around the image target unpredictably, the pixels at different positions inside the target are given different weights: the closer a pixel is to the target center, the higher its weight. The probability density of the infrared-image target histogram is expressed by the formula:
q_e = C · Σ_{i=1..l} h(||x_i||²) · δ[b(x_i) − e]

C = 1 / Σ_{i=1..l} h(||x_i||²)

in the formula, h is a kernel function used to weight the pixels, l represents the number of pixel points in the target window, and δ[b(x_i) − e] judges whether the quantized color and feature value of pixel point x_i within the target range falls into feature bin e; C_t denotes the corresponding normalization constant of the candidate model. For the candidate infrared image region centered on the image-space point, the probability density of the candidate target histogram in the tracking window is calculated:
p_e(w) = C_t · Σ_{i=1..l} h(||(w − x_i)/u||²) · δ[b(x_i) − e]
in the formula, the indicator term likewise judges whether the color value and the feature value of the pixel points within the candidate range coincide with the bin; the similarity of the probability distributions of the target histogram and the candidate target histogram is then calculated:
ρ(p(w), q) = Σ_{e=1..o} sqrt( p_e(w) · q_e )
in the formula, o is the number of elements in the feature space and e indexes the feature values; from this, the weight with which each pixel point in the current image belongs to the target is calculated:
X_i = Σ_{e=1..o} sqrt( q_e / p_e(w) ) · δ[b(x_i) − e]
in the above, X_i represents the weight with which pixel point i of the current image belongs to the target; the new position of the candidate target in the next frame of the infrared image is then continuously calculated:
w₁ = ( Σ_{i=1..l} x_i · X_i · g(||(w − x_i)/u||²) ) / ( Σ_{i=1..l} X_i · g(||(w − x_i)/u||²) )
in the formula, g denotes the kernel-density weighting profile used in the estimate; when the calculated similarity converges to a stable Bhattacharyya coefficient, the target tracking is finished; otherwise the center w of the tracked target is replaced by the newly computed position and the operation is repeated to find a target position meeting the requirement, until target tracking is complete.
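The histogram-similarity core of the tracking loop can be sketched as follows; the function names and the bin-indexing convention are illustrative, and the histograms are assumed already normalized:

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity of two normalized histograms: rho = sum_e sqrt(p_e * q_e);
    equals 1.0 for identical distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))

def pixel_weights(q, p, bins):
    """Per-pixel weights sqrt(q_e / p_e): a pixel whose bin e is
    under-represented in the candidate histogram p relative to the target
    model q is weighted up, pulling the window toward the target."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    w = np.sqrt(np.divide(q, p, out=np.zeros_like(q), where=p > 0))
    return w[np.asarray(bins)]
```

Each iteration of the tracker recomputes the candidate histogram at the new window center and stops once the Bhattacharyya coefficient stabilizes.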
As shown in fig. 5 to 6, an object recognition system for intelligent road driving of an automobile includes:
a sensor unit for detecting a target object;
the integrated computing and processing unit is used for processing the data of the collected image and the reflected ultrasonic wave and extracting the characteristic information of the target object;
the fusion processing unit is used for fusing the acquired images so as to ensure the accuracy and stability of the information;
the control unit is used for carrying out transmission communication of information;
and the display unit is used for presenting the target object image.
In one embodiment, the sensor unit is composed of an ultrasonic sensor, an infrared sensor and a camera.
In one embodiment, the sensor unit is internally provided with a filtering module, the filtering module comprising: the resistor R1, the resistor R2, the resistor R3, the resistor R4, the resistor R5, the resistor R6, the resistor R7, the capacitor C1, the capacitor C2, the capacitor C3, the capacitor C4, the capacitor C5, the capacitor C6, the diode D1, the diode D2, the triode Q1, the MOS transistor Q2 and the rheostat RV1;
one end of the capacitor C1 is connected to one end of the resistor R3 and receives the signal; the other end of the capacitor C1 is connected to one end of the resistor R1 and the base of the triode Q1; the collector of the triode Q1 is connected to the other end of the resistor R1, one end of the capacitor C2 and one end of the resistor R2; the emitter of the triode Q1 is connected to one end of the resistor R7; the other end of the capacitor C2 is connected to the anode of the diode D1 and the cathode of the diode D2; the anode of the diode D2 is connected to one end of the capacitor C3, one end of the resistor R4, one end of the rheostat RV1 and the gate of the MOS transistor Q2; the source of the MOS transistor Q2 is connected to one end of the capacitor C4; the drain of the MOS transistor Q2 is connected to one end of the capacitor C5, one end of the resistor R5 and one end of the resistor R6; the other end and the control end of the rheostat RV1 are connected simultaneously to the other end of the resistor R2 and the other end of the resistor R5 and receive the working voltage; the other end of the capacitor C4 is connected simultaneously to the other end of the resistor R3 and one end of the capacitor C6; the other end of the resistor R6 is connected simultaneously to the other end of the capacitor C5, the other end of the resistor R4, the other end of the capacitor C3, the cathode of the diode D1 and the other end of the resistor R7 and grounded; and the other end of the capacitor C6 outputs the signal.
In one embodiment, the resistor R3, the capacitor C4 and the MOS transistor Q2 form a filter circuit. When the input signal is weak, the voltage output by the rectifier circuit formed by the diode D1, the diode D2 and the capacitor C3 is small, the conduction of the MOS transistor Q2 deepens, and the high-frequency noise in the input signal is filtered by the resistor R3 and the capacitor C4; when the input signal is strong, the negative voltage rectified out by the diode D1, the diode D2 and the capacitor C3 increases, the conduction of the MOS transistor Q2 is weakened or cut off, and the attenuation of the high-frequency signal by the resistor R3 and the capacitor C4 is reduced or removed; the rheostat RV1 controls the noise-suppression action level.
In one embodiment, the ultrasonic sensor, based on independently developed multilayer piezoelectric ceramic transceiver units, breaks through the technical bottleneck of long-distance sensing and realizes target detection under working conditions such as adaptive car-following, garage searching and parking; the independently developed infrared sensor, based on temperature imaging, solves the problem that a visible-light sensor cannot identify targets under low-illumination conditions; the newly developed vision algorithm, based on a high-resolution camera, is mainly used to identify targets such as lane lines, vehicles, pedestrians and traffic signs; and the fusion module performs fusion and decision-making on the recognition results of the sensors, providing reliable information support for the control execution of the intelligent automobile.
The ultrasonic sensor is based on a pure-silver low-temperature co-firing technology; multilayer piezoelectric ceramics were developed to replace the traditional single-layer technology, so the emission energy is more than doubled, the detection distance is increased from 2-5 meters to 10 meters, and the cost is reduced by more than 60%.
The infrared sensor is a newly developed SMD (surface-mounted device) infrared array sensor with 16 x 16 units; it adopts an integrated-lens transistor packaging technology that markedly reduces the size and improves space efficiency by 50%, and it realizes high-definition infrared imaging in low-illumination environments, at the leading level of the industry.
The preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, however, the present invention is not limited to the specific details of the embodiments, and various equivalent changes can be made to the technical solution of the present invention within the technical idea of the present invention, and these equivalent changes are within the protection scope of the present invention.

Claims (10)

1. A target identification method for intelligent automobile road driving is characterized by comprising the following steps:
step 1, firstly, an ultrasonic sensor in a sensor unit determines the approximate position and distance of a target object, and the approximate position and distance are processed by an integrated computing processing unit and sent to a control unit;
step 2, detecting the specific information position of the target object through an infrared sensor and a camera in the sensor unit, processing the information position through the fusion processing unit and sending the information position to the control unit;
and 3, the control unit outputs signals to the display unit according to the fusion processing unit for reference of a client, and meanwhile, real-time tracking of the target object is carried out.
2. The method for identifying the target of the intelligent automobile road driving according to claim 1,
in step 1 of the method, the integrated computing processing unit needs to perform algorithmic correction for both the shortcomings of the basic calibration and the actual measurement error, so as to obtain more accurate ranging information;
firstly, before ranging, the data need to be calibrated according to the external temperature information and the measurement-time information in the echo data; calibration reduces the temperature influence and the systematic ranging error. The ambient temperature affects the ultrasonic velocity and hence the identification of the actual obstacle distance, so the actual temperature value needs to be calculated from the temperature error:
T_act = f(T_echo, T_set, T_veh, T_sys)

in the formula, T_echo represents the temperature value in the echo data, T_act represents the current ambient actual temperature value, T_set represents the required temperature set by the system, T_veh represents the temperature outside the vehicle, and T_sys represents the temperature in the vicinity of the current system. The current ambient temperature is derived by fusing the temperatures set by the different systems with the vehicle-exterior temperature: the system issues a command to acquire a temperature value during the gap between transmissions, the real-time temperature at the system's ultrasonic sensor is actively acquired every 1 s, and the vehicle-exterior temperature can be obtained over the corresponding CAN bus. Taking the bypass system temperature as a reference, the vehicle-exterior temperature and the bypass system temperature are corrected to obtain the actual ambient temperature value, a correction coefficient is calibrated against it, and finally the temperature-value signal is output through a sliding-window filter;
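The fusion-then-smooth pipeline above can be sketched as follows; the blend factor and window length are assumed parameters, not values from the text, and the class name is illustrative:

```python
from collections import deque

class TemperatureCalibrator:
    """Fuse the near-system temperature with the CAN-bus vehicle-exterior
    temperature, then smooth the result with a sliding-window mean filter."""

    def __init__(self, window=5, blend=0.5):
        self.samples = deque(maxlen=window)  # sliding window of fused readings
        self.blend = blend                   # assumed fusion factor

    def update(self, t_system, t_vehicle_exterior):
        """Feed one reading pair (deg C); returns the filtered ambient estimate."""
        fused = self.blend * t_system + (1.0 - self.blend) * t_vehicle_exterior
        self.samples.append(fused)
        return sum(self.samples) / len(self.samples)
```

Calling `update` once per 1 s acquisition cycle mirrors the text's periodic sampling, with the deque bounding the sliding window automatically.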
secondly, although ultrasonic waves travel fast, a certain time error arises during propagation, so the time must be calibrated to obtain an accurate echo time; time-alignment calibration is performed for each transmission channel by attaching a higher-precision timestamp during wave transmission and reception, and the calibration method is as follows:
t_echo = t_rx − t_tx

in the formula, t_echo represents the actual echo time, t_tx represents the wave-emitting timestamp of the transmitting channel, and t_rx represents the timestamp of the current receiving channel;
thus, the actual distance value is obtained according to the calculated temperature error and the time error, namely:
d = (v · t_echo) / 2

v = 331.4 + 0.607 · T_act

in the formula, d represents the actual distance of the target object from the car, and v represents the actual transmission speed of the ultrasonic wave (in m/s, with T_act in °C); the actual distance value signal is transmitted to the control unit.
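The temperature- and time-corrected ranging can be sketched as below; the linear sound-speed approximation v = 331.4 + 0.607·T is the standard physical relation, used here as an assumption since the patent's own formula is given only as an image:

```python
def sound_speed(temp_c):
    """Speed of sound in air (m/s) at temperature temp_c (deg C), using the
    standard linear approximation v = 331.4 + 0.607 * T."""
    return 331.4 + 0.607 * temp_c

def target_distance(t_rx, t_tx, temp_c):
    """Echo time (receive timestamp minus transmit timestamp, in seconds)
    times the temperature-corrected sound speed, halved for the round trip;
    returns metres."""
    return sound_speed(temp_c) * (t_rx - t_tx) / 2.0
```

The halving accounts for the wave travelling to the obstacle and back, so the echo time covers twice the target distance.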
3. The method for identifying the target of the intelligent automobile in road driving according to claim 1, wherein in step 2, the infrared sensor and the camera in the sensor unit perform specific information position detection on the target object, and the specific steps are as follows:
step 2-1, processing images of information collected by the infrared sensor and the camera;
step 2-2, positioning a target object in the image and extracting characteristics;
and step 2-3, carrying out image fusion according to the extracted features and outputting the image-information values to the control unit.
4. The method as claimed in claim 2, wherein the image processing operation performs image gray-scale conversion, image denoising and image enhancement on the images collected by the infrared sensor and the camera, thereby removing the internal noise of the image and enhancing the information contrast; the specific process is as follows:
firstly, carrying out gray level conversion;
secondly, carrying out grayscale image denoising treatment;
and thirdly, carrying out image enhancement.
5. The method for identifying the target of the intelligent automobile during road driving, characterized in that detection, positioning and feature extraction of the target object are carried out on the output infrared gray image and visible-light gray image, so that the specific features of the target object are extracted; the specific steps are as follows:
firstly, positioning an image target;
and secondly, extracting the characteristics of the target object.
6. The method for identifying the target for the intelligent automobile to drive on the road, characterized in that the output infrared image features and visible-light image features are fused; the large difference in imaging principle means that the types of information contained in the infrared and visible-light images also differ greatly, so the fused image carries more varied information than a single-modality image, which helps improve the stability and accuracy of target detection and identification and greatly improves the adaptability of the automobile driver-assistance system; the specific method is:
carrying out weighted average on gray values corresponding to pixels of the original image according to a set weight value, thereby obtaining a fused image:
F(x, y) = αA(x, y) + βB(x, y)

in the formula, F(x, y) represents the fused image, A(x, y) represents the infrared image, B(x, y) represents the visible-light image, and α and β represent the weight coefficients of infrared and visible light respectively, with α + β = 1.
7. an object recognition system for intelligent vehicle road driving, comprising:
a sensor unit for detecting a target object;
the integrated computing and processing unit is used for processing the data of the collected image and the reflected ultrasonic wave and extracting the characteristic information of the target object;
the fusion processing unit is used for fusing the acquired images so as to ensure the accuracy and stability of the information;
the control unit is used for carrying out transmission communication of information;
and the display unit is used for presenting the target object image.
8. The system for identifying the target of the intelligent automobile during road driving according to claim 7, wherein the sensor unit is composed of an ultrasonic sensor, an infrared sensor and a camera.
9. The system for identifying the target of the intelligent automobile during road driving according to claim 7, wherein a filtering module is arranged inside the sensor unit, the filtering module comprising: a resistor R1, a resistor R2, a resistor R3, a resistor R4, a resistor R5, a resistor R6, a resistor R7, a capacitor C1, a capacitor C2, a capacitor C3, a capacitor C4, a capacitor C5, a capacitor C6, a diode D1, a diode D2, a triode Q1, a MOS transistor Q2 and a rheostat RV1;
one end of the capacitor C1 is connected to one end of the resistor R3 and receives the signal; the other end of the capacitor C1 is connected to one end of the resistor R1 and the base of the triode Q1; the collector of the triode Q1 is connected to the other end of the resistor R1, one end of the capacitor C2 and one end of the resistor R2; the emitter of the triode Q1 is connected to one end of the resistor R7; the other end of the capacitor C2 is connected to the anode of the diode D1 and the cathode of the diode D2; the anode of the diode D2 is connected to one end of the capacitor C3, one end of the resistor R4, one end of the rheostat RV1 and the gate of the MOS transistor Q2; the source of the MOS transistor Q2 is connected to one end of the capacitor C4; the drain of the MOS transistor Q2 is connected to one end of the capacitor C5, one end of the resistor R5 and one end of the resistor R6; the other end and the control end of the rheostat RV1 are connected simultaneously to the other end of the resistor R2 and the other end of the resistor R5 and receive the working voltage; the other end of the capacitor C4 is connected simultaneously to the other end of the resistor R3 and one end of the capacitor C6; the other end of the resistor R6 is connected simultaneously to the other end of the capacitor C5, the other end of the resistor R4, the other end of the capacitor C3, the cathode of the diode D1 and the other end of the resistor R7 and grounded; and the other end of the capacitor C6 outputs the signal.
10. The system as claimed in claim 9, wherein the resistor R3, the capacitor C4 and the MOS transistor Q2 form a filter circuit; when the input signal is weak, the voltage output by the rectifier circuit formed by the diode D1, the diode D2 and the capacitor C3 is small, the conduction of the MOS transistor Q2 deepens, and the high-frequency noise in the input signal is filtered by the resistor R3 and the capacitor C4; when the input signal is strong, the negative voltage rectified out by the diode D1, the diode D2 and the capacitor C3 increases, the conduction of the MOS transistor Q2 is weakened or cut off, and the attenuation of the high-frequency signal by the resistor R3 and the capacitor C4 is reduced or removed; the rheostat RV1 controls the noise-suppression action level.
CN202210420131.XA 2022-04-20 2022-04-20 Target identification system and method for intelligent automobile road driving Pending CN115131980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210420131.XA CN115131980A (en) 2022-04-20 2022-04-20 Target identification system and method for intelligent automobile road driving

Publications (1)

Publication Number Publication Date
CN115131980A true CN115131980A (en) 2022-09-30

Family

ID=83376376


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2785434A1 (en) * 1998-11-03 2000-05-05 Renault Motor vehicle driving aid using video images has field of view of cameras altered in real time depending on surroundings
CN203134149U (en) * 2012-12-11 2013-08-14 武汉高德红外股份有限公司 Vehicle auxiliary driving system based on different wave band imaging fusion image processing
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN109558848A (en) * 2018-11-30 2019-04-02 湖南华诺星空电子技术有限公司 A kind of unmanned plane life detection method based on Multi-source Information Fusion
CN112406702A (en) * 2019-08-22 2021-02-26 胡月华 Driving assistance system and method for enhancing driver's eyesight
CN112509333A (en) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 Roadside parking vehicle track identification method and system based on multi-sensor sensing
CN113525234A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 Auxiliary driving system device
CN215581078U (en) * 2021-06-09 2022-01-18 青岛元通电子有限公司 RC filter circuit with adjustable cut-off frequency


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Haihuan et al.: "Adaptive ranging and positioning design for automatic parking based on ultrasonic radar", Science & Technology Vision (科技视界), vol. 28, no. 1, page 2 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination