CN105711501A - Car look-around camera-based car monitoring method and system in dead zone - Google Patents
- Publication number
- CN105711501A (application CN201610244615.8A)
- Authority
- CN
- China
- Prior art keywords
- prime
- camera
- point
- image
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/101—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using cameras with adjustable capturing direction
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Mechanical Engineering (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention relates to the field of vehicle monitoring and provides a blind-zone vehicle monitoring method and system based on the surround-view cameras of an automobile. The system comprises a surround-view fisheye camera, a processor and a display; the surround-view camera is connected to the processor, and the processor is connected to the display via bidirectional communication. The method and system address the safety hazards that arise during lane changing and merging because the field of view of the rear-view mirrors is limited and blind zones always exist on both sides of the vehicle. By compensating for the viewing-angle differences caused by vehicles imaging at different positions within the blind zone, a good detection effect is achieved; the method and system can also monitor a large blind-zone area, deliver more reliable detection results, and run at comparatively high speed.
Description
Technical field
The invention belongs to the field of automobile monitoring, and in particular relates to a blind-zone vehicle monitoring method and system based on the surround-view cameras of an automobile.
Background technology
While a vehicle is travelling, the driver observes traffic to the side and rear through the rear-view mirrors beside the front windows. Because the field of view of these mirrors is limited, blind zones always exist on both sides of the vehicle, creating safety hazards during lane changes and merging. To eliminate the influence of blind zones on driving safety, the following blind-zone monitoring methods are currently in use.
The first is a blind-zone monitoring system based on ultrasonic probes: the probes deliver the echo signals of obstacles in the detection area to a controller, which computes the distance from the obstacle to the vehicle and issues an alarm. Such products are simple in structure and inexpensive. However, the beam angle of an ultrasonic probe is usually narrow, so several probes must be installed to widen the monitored range, and the detectable straight-line distance is short, so regions at a certain distance cannot be monitored. Because the judgement is based solely on echo signals, roadside structures such as median strips, guardrails and trees cannot be distinguished, and the system is easily disturbed by objects floating in the air, so it cannot be restricted to warning only about vehicles in the blind zone.
The second is a blind-zone monitoring system based on microwave or laser ranging. Like the ultrasonic system, it computes the distance to an obstacle from echo signals and issues an alarm. Compared with the ultrasonic system, its advantage is a much larger straight-line detection range, but its price is also much higher. Because it likewise judges obstacles from echo signals, it is equally affected by roadside structures and floating objects, and cannot be restricted to warning only about vehicles in the blind zone.
The third is a camera-based blind-zone monitoring system: the images captured by the camera are sent to a controller, which identifies vehicles by pattern-recognition algorithms and issues an alarm. Compared with the first two products, a camera-based system is far more flexible and can identify different objects as required, although the corresponding algorithms are more complex. The known shortcoming of camera-based blind-zone vehicle monitoring is that the blind zone is usually rather large, so vehicles at different positions within it present large viewing-angle changes in the captured image, which greatly increases the difficulty of recognition. In addition, different vehicle types such as cars, MPVs (Multi-Purpose Vehicles), SUVs (Sport Utility Vehicles), buses and trucks differ greatly in size, and their widely varying aspect ratios further increase the difficulty of recognition.
Summary of the invention
The object of the invention is to provide a blind-zone vehicle monitoring method and system based on the surround-view cameras of an automobile, which aim to overcome the viewing-angle differences caused by vehicles imaging at different positions in the blind zone and thereby achieve a good detection effect, addressing the problem that the limited field of view of the rear-view mirrors leaves permanent blind zones on both sides of the vehicle and creates safety hazards during lane changes and merging.
The invention is realised as follows. The system includes a surround-view fisheye camera, a processor and a display unit, and the method comprises the following steps:
A. Capture the original fisheye image with the surround-view fisheye camera and establish the body coordinate system: the origin is the ground-plane centre point of the minimum enclosing rectangle of the vehicle's vertical projection onto the ground, straight ahead is the positive X direction, horizontally to the right is the Z direction, and vertically upward is the Y direction. Correct the image by the equations
x' = (u' - u0)/fx, y' = (v' - v0)/fy (1)
x" = x'·(1 + k1·r^2 + k2·r^4 + k3·r^6)/(1 + k4·r^2 + k5·r^4 + k6·r^6) + 2·p1·x'·y' + p2·(r^2 + 2·x'^2), y" = y'·(1 + k1·r^2 + k2·r^4 + k3·r^6)/(1 + k4·r^2 + k5·r^4 + k6·r^6) + p1·(r^2 + 2·y'^2) + 2·p2·x'·y' (2)
u = fx·x" + u0, v = fy·y" + v0 (3)
to obtain the corrected perspective image. Here (u', v') is a pixel coordinate on the corrected perspective image, (u0, v0) is the principal-point pixel coordinate, fx and fy are the focal lengths along the horizontal and vertical image directions, k1, k2, k3, k4, k5, k6 are radial distortion parameters, p1 and p2 are tangential distortion parameters, r^2 = x'^2 + y'^2, (u, v) is the corresponding pixel coordinate on the original image, (x', y') is the coordinate of the actual image point, and (x", y") is the ideal image-point coordinate computed from the perspective model.
B. Rotate the surround-view fisheye camera to obtain a virtual camera and map the real camera to the virtual camera by the equation
Pv = s·Kv·R0V·K0^(-1)·P0, with R0V = Rv·R0^(-1) (4)
to generate the side-view image. Here Pv and P0 are corresponding points on the virtual-camera and real-camera images, s is a scale factor, Kv and K0 are the intrinsic matrices of the virtual and real cameras, R0V is the rotation matrix between the two cameras, R0 is the rotation of the real camera relative to the reference coordinate system, and Rv is the rotation of the virtual camera relative to the body coordinate system.
C. According to the basic principle of perspective projection, the intrinsic and extrinsic matrices of the virtual camera satisfy s·p = K·(R·P + t). Considering only the ground, set Y = 0, so that
s·[u, v, 1]^T = K·[r1 r3 t]·[X, Z, 1]^T, i.e. [X, Z, 1]^T ∝ (K·[r1 r3 t])^(-1)·[u, v, 1]^T,
which generates the ground distance matrix. Here P = (X, Y, Z)^T is a three-dimensional point in the body coordinate system, p = (u, v, 1)^T is the homogeneous coordinate of the corresponding pixel in the image coordinate system, s is a scale factor, K, R and t are the intrinsic matrix and extrinsic parameters of the virtual camera, r1 and r3 are the first and third columns of R, and (x, z) is the ground coordinate along the X and Z directions of the body coordinate system stored at each pixel.
D. Express the rectangular monitored region, projected vertically onto the ground, relative to the distance matrix, and generate the monitored-area image ROI' in which each candidate is represented by its tire ground-contact point. Traverse every point of the distance matrix: for a point (u, v) whose value is (x, z), if X- ≤ x ≤ X+ and Z- ≤ z ≤ Z+ then set ROI'(u, v) = 255, otherwise ROI'(u, v) = 0. Here X- and X+ are the preset minimum and maximum thresholds to be monitored along the X direction of the body coordinate system, and Z- and Z+ are the corresponding thresholds along the Z direction. Each point thus takes the value 0 or 255: 0 means the point requires no subsequent object detection, 255 means it does.
E. Convert the monitored-area image ROI', represented by tire ground-contact points, into the monitored-area image ROI represented by the top-left corner of the corresponding detection rectangle, e.g. ROI(u - M/2, v - N) = ROI'(u, v) when the contact point is taken as the bottom centre of the window, and then convert ROI into the monitoring-point array A used for concrete object detection. Here M × N pixels is the detection-window size of the target object.
F. For each element (u, v) of the monitoring-point array A, the detection window it represents is (u, v, M, N). Input the detection window and the side-view image, detect with known techniques, and output whether the window contains the target object.
A further technical scheme of the invention is that step A further comprises:
A1. Obtaining the intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the body coordinate system, by Zhang Zhengyou's calibration method.
A further technical scheme of the invention is that step B further comprises:
B1. Rotating the surround-view fisheye camera to obtain a virtual camera whose optical axis is perpendicular to the detection plane; through the virtual camera an image free of perspective distortion is obtained;
B2. During normal driving, pointing the virtual camera to the left or right of the current vehicle, so that an undistorted tire image of a vehicle behind and to the side can be obtained.
A further technical scheme of the invention is that step F further comprises:
F1. Performing feature extraction and computation on the detection window, then feeding the computed feature values to a neural network for classifier judgement.
A further technical scheme of the invention is that the feature extraction in step F1 uses Haar rectangular features, LBP features or HOG features, and the classifier in step F1 is a cascade of weak classifiers or an SVM classifier.
Another object of the invention is to provide a blind-zone vehicle monitoring system based on the surround-view cameras of an automobile. The system includes a surround-view fisheye camera, a processor and a display unit; the surround-view camera is connected to the processor, and the processor is connected to the display unit via bidirectional communication. The system is characterised by the following modules.
The perspective-image module captures the original fisheye image with the surround-view fisheye camera, establishes the body coordinate system (origin at the ground-plane centre point of the minimum enclosing rectangle of the vehicle's vertical projection onto the ground; straight ahead the positive X direction, horizontally right the Z direction, vertically upward the Y direction) and corrects the image with the distortion-correction equations of step A to obtain the corrected perspective image, the symbols (u', v'), (u0, v0), fx, fy, k1...k6, p1, p2, r^2, (u, v), (x', y') and (x", y") having the same meanings as in step A.
The side-view image module rotates the surround-view fisheye camera to obtain a virtual camera and maps the real camera to the virtual camera by the equation Pv = s·Kv·R0V·K0^(-1)·P0, generating the side-view image, where Pv and P0 are corresponding points on the virtual-camera and real-camera images, s is a scale factor, Kv and K0 are the intrinsic matrices of the two cameras, R0V is the rotation between them, R0 is the rotation of the real camera relative to the reference coordinate system, and Rv is the rotation of the virtual camera relative to the body coordinate system.
The distance-matrix module obtains the intrinsic and extrinsic matrices of the virtual camera from the perspective-projection equation s·p = K·(R·P + t) and, considering only the ground (Y = 0), generates the ground distance matrix [X, Z, 1]^T ∝ (K·[r1 r3 t])^(-1)·[u, v, 1]^T, where P = (X, Y, Z)^T is a three-dimensional point in the body coordinate system, p = (u, v, 1)^T the homogeneous pixel coordinate, s a scale factor, K, R and t the intrinsic matrix and extrinsic parameters of the virtual camera, r1 and r3 the first and third columns of R, and (x, z) the ground coordinate stored at each pixel.
The monitored-area image ROI' module expresses the rectangular monitored ground region relative to the distance matrix and generates the monitored-area image ROI' represented by tire ground-contact points: for each point (u, v) of the distance matrix with value (x, z), if X- ≤ x ≤ X+ and Z- ≤ z ≤ Z+ then ROI'(u, v) = 255, otherwise ROI'(u, v) = 0, where X-, X+, Z- and Z+ are the preset monitoring thresholds along the X and Z directions of the body coordinate system; the value 0 means no subsequent object detection is needed at that point, 255 means it is.
The monitoring-point array module converts the monitored-area image ROI', represented by tire ground-contact points, into the monitored-area image ROI represented by the top-left corners of the corresponding detection rectangles, and then into the monitoring-point array A used for concrete object detection, M × N pixels being the detection-window size of the target object.
The detection output module takes, for each element (u, v) of the monitoring-point array A, the detection window (u, v, M, N), inputs the window and the side-view image, detects with known techniques and outputs whether the window contains the target object.
A further technical scheme of the invention includes a calibration unit, which obtains the intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the body coordinate system, by Zhang Zhengyou's calibration method.
A further technical scheme of the invention includes a perspective-distortion-free image unit and an undistorted tire image unit. The perspective-distortion-free image unit rotates the surround-view fisheye camera to obtain a virtual camera whose optical axis is perpendicular to the detection plane, through which an image free of perspective distortion is obtained. The undistorted tire image unit points the virtual camera to the left or right of the current vehicle during normal driving, so that an undistorted tire image of a vehicle behind and to the side is obtained.
A further technical scheme of the invention includes a judgement unit, which performs feature extraction and computation on the detection window and then feeds the computed feature values to a neural network for classifier judgement.
A further technical scheme of the invention is that the number of surround-view fisheye cameras is scalable.
The beneficial effects of the invention are as follows: it overcomes the viewing-angle differences caused by vehicles imaging at different positions in the blind zone and achieves a good detection effect; at the same time it can monitor a large blind-zone area, delivers more reliable detection results and runs at high speed, which is of great use during lane changes and merging.
Description of the drawings
Fig. 1 is a flow chart of the blind-zone vehicle monitoring method based on the surround-view cameras of an automobile provided by an embodiment of the invention;
Fig. 2 is a block diagram of the blind-zone vehicle monitoring system based on the surround-view cameras of an automobile provided by an embodiment of the invention.
Detailed description of the invention
Fig. 1 illustrates the blind-zone vehicle monitoring method based on the surround-view cameras of an automobile provided by the invention, which aims to overcome the viewing-angle differences caused by vehicles imaging at different positions in the blind zone and achieve a good detection effect, addressing the problem that the limited field of view of the rear-view mirrors leaves permanent blind zones on both sides of the vehicle and creates safety hazards during lane changes and merging.
The invention is realised as a blind-zone vehicle monitoring method based on the surround-view cameras of an automobile, involving a surround-view fisheye camera, a processor and a display unit, and comprising the following steps according to the method flow chart:
Step S1: the surround-view fisheye camera captures the original fisheye image. The body coordinate system is established with its origin at the ground-plane centre point of the minimum enclosing rectangle of the vehicle's vertical projection onto the ground; straight ahead is the positive X direction, horizontally to the right the Z direction, and vertically upward the Y direction. The intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the body coordinate system, are obtained by Zhang Zhengyou's calibration method. The image is corrected by the equations x' = (u' - u0)/fx, y' = (v' - v0)/fy (1); x" = x'·(1 + k1·r^2 + k2·r^4 + k3·r^6)/(1 + k4·r^2 + k5·r^4 + k6·r^6) + 2·p1·x'·y' + p2·(r^2 + 2·x'^2), y" = y'·(1 + k1·r^2 + k2·r^4 + k3·r^6)/(1 + k4·r^2 + k5·r^4 + k6·r^6) + p1·(r^2 + 2·y'^2) + 2·p2·x'·y' (2); u = fx·x" + u0, v = fy·y" + v0 (3), yielding the corrected perspective image. Here (u', v') is a pixel coordinate on the corrected perspective image, (u0, v0) the principal-point pixel coordinate, fx and fy the focal lengths along the horizontal and vertical image directions, k1...k6 the radial distortion parameters, p1 and p2 the tangential distortion parameters, r^2 = x'^2 + y'^2, (u, v) the corresponding pixel coordinate on the original image, (x', y') the coordinate of the actual image point, and (x", y") the ideal image-point coordinate computed from the perspective model.
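The correction of step S1 follows the rational radial plus tangential distortion model reconstructed above. As a minimal pure-NumPy sketch, the fragment below builds, for every pixel of the corrected perspective image, the source pixel to sample in the original fisheye image; the function name and parameter packing are illustrative assumptions, not from the patent:

```python
import numpy as np

def undistort_map(w, h, K, k, p):
    """For every pixel (u', v') of the corrected image, compute the source
    pixel (u, v) in the distorted image under the rational distortion model.
    K = (fx, fy, u0, v0); k = (k1..k6) radial; p = (p1, p2) tangential."""
    fx, fy, u0, v0 = K
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    uu, vv = np.meshgrid(np.arange(w), np.arange(h))
    # normalised coordinates of the corrected pixel (equation 1)
    x = (uu - u0) / fx
    y = (vv - v0) / fy
    r2 = x * x + y * y
    # rational radial factor plus tangential terms (equation 2)
    radial = (1 + k1*r2 + k2*r2**2 + k3*r2**3) / (1 + k4*r2 + k5*r2**2 + k6*r2**3)
    xd = x * radial + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y * radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    # back to pixel coordinates of the original image (equation 3)
    return fx * xd + u0, fy * yd + v0

# Sanity check: with all distortion coefficients zero, the map is the identity.
mu, mv = undistort_map(4, 3, (100.0, 100.0, 2.0, 1.5), (0.0,) * 6, (0.0, 0.0))
```

In practice the two returned maps would be fed to a remapping routine with bilinear interpolation to produce the corrected perspective image.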
Step S2: the surround-view fisheye camera is rotated to obtain a virtual camera whose optical axis is perpendicular to the detection plane; through the virtual camera an image free of perspective distortion is obtained. During normal driving the virtual camera is pointed to the left or right of the current vehicle, so that an undistorted tire image of a vehicle behind and to the side is obtained. The real camera is mapped to the virtual camera by the equation Pv = s·Kv·R0V·K0^(-1)·P0 (4), generating the side-view image; here Pv and P0 are corresponding points on the virtual-camera and real-camera images, s is a scale factor, Kv and K0 are the intrinsic matrices of the virtual and real cameras, R0V = Rv·R0^(-1) is the rotation matrix between the two cameras, R0 is the rotation of the real camera relative to the reference coordinate system, and Rv is the rotation of the virtual camera relative to the body coordinate system.
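Because the virtual camera of step S2 differs from the real camera by a pure rotation, equation (4) is a homography between the two image planes. A short sketch under that assumption (function names are illustrative); with identical intrinsics and identical rotations the mapping must reduce to the identity:

```python
import numpy as np

def virtual_view_homography(Kv, K0, R0, Rv):
    """Pixel mapping Pv ~ Kv * R0V * inv(K0) * P0 between a real camera
    (rotation R0 w.r.t. the reference frame) and a rotated virtual camera
    (rotation Rv); R0V = Rv * inv(R0). Valid only without translation."""
    R0V = Rv @ np.linalg.inv(R0)
    return Kv @ R0V @ np.linalg.inv(K0)

def warp_point(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # divide out the scale factor s

K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 48.0], [0.0, 0.0, 1.0]])
I = np.eye(3)
H = virtual_view_homography(K, K, I, I)   # identical cameras
u2, v2 = warp_point(H, 10.0, 20.0)
```

A full side-view image would be produced by applying `warp_point` (or its inverse) over the whole pixel grid.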
Step S3: according to the basic principle of perspective projection, the intrinsic and extrinsic matrices of the virtual camera satisfy s·p = K·(R·P + t). Considering only the ground, Y = 0, so that s·[u, v, 1]^T = K·[r1 r3 t]·[X, Z, 1]^T, i.e. [X, Z, 1]^T ∝ (K·[r1 r3 t])^(-1)·[u, v, 1]^T, which generates the ground distance matrix. Here P = (X, Y, Z)^T is a three-dimensional point in the body coordinate system, p = (u, v, 1)^T the homogeneous coordinate of the corresponding pixel in the image coordinate system, s a scale factor, K, R and t the intrinsic matrix and extrinsic parameters of the virtual camera, r1 and r3 the first and third columns of R, and (x, z) the ground coordinate along the X and Z directions of the body coordinate system stored at each pixel.
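The ground-plane inversion of step S3 can be sketched directly from the reconstructed relation: drop the second column of R (since Y = 0), form the 3×3 ground homography, and invert it per pixel. The camera pose below is a toy overhead pose chosen so the result is easy to check by hand, not the patent's side-view pose:

```python
import numpy as np

def ground_distance(K, R, t, u, v):
    """Invert s*[u,v,1]^T = K*[r1 r3 t]*[X,Z,1]^T to recover the ground
    point (X, Z) seen at pixel (u, v); r1, r3 are columns 1 and 3 of R."""
    H = K @ np.column_stack((R[:, 0], R[:, 2], t))
    q = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return q[0] / q[2], q[1] / q[2]

# Toy camera 2 m above the origin looking straight down at the ground plane.
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 48.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
t = np.array([0.0, 0.0, 2.0])
# The ground point (X, Z) = (1.0, 0.5) projects to pixel (114, 73) here.
X, Z = ground_distance(K, R, t, 114.0, 73.0)
```

Evaluating this for every pixel yields the (x, z) distance matrix of the text.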
Step S4: the rectangular monitored region projected vertically onto the ground is expressed relative to the distance matrix, generating the monitored-area image ROI' represented by tire ground-contact points. Every point of the distance matrix is traversed: for a point (u, v) whose value is (x, z), if X- ≤ x ≤ X+ and Z- ≤ z ≤ Z+ then ROI'(u, v) = 255, otherwise ROI'(u, v) = 0. Here X- and X+ are the preset minimum and maximum thresholds to be monitored along the X direction of the body coordinate system, and Z- and Z+ the corresponding thresholds along the Z direction. Each point takes the value 0 or 255: 0 means the point needs no subsequent object detection, 255 means it does.
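The thresholding of step S4 is a simple vectorised mask over the distance matrix. A minimal sketch (the array layout is an assumption: one (x, z) pair per pixel):

```python
import numpy as np

def roi_mask(xz, x_rng, z_rng):
    """Step S4: threshold the ground distance matrix into a 0/255 mask.
    xz: H x W x 2 array of (x, z) ground coordinates per pixel;
    x_rng = (X-, X+), z_rng = (Z-, Z+) monitoring thresholds."""
    x, z = xz[..., 0], xz[..., 1]
    inside = (x_rng[0] <= x) & (x <= x_rng[1]) & (z_rng[0] <= z) & (z <= z_rng[1])
    return np.where(inside, 255, 0).astype(np.uint8)

# Tiny synthetic distance matrix: x equals the column index, z the row index.
uu, vv = np.meshgrid(np.arange(4.0), np.arange(3.0))
xz = np.dstack([uu, vv])
mask = roi_mask(xz, (1.0, 2.0), (0.0, 1.0))   # 2 x 2 block of 255s
```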
Step S5: the monitored-area image ROI', represented by tire ground-contact points, is converted into the monitored-area image ROI represented by the top-left corner of the corresponding detection rectangle, e.g. ROI(u - M/2, v - N) = ROI'(u, v) when the contact point is taken as the bottom centre of the window, and ROI is then converted into the monitoring-point array A used for concrete object detection; M × N pixels is the detection-window size of the target object.
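Step S5 amounts to shifting each marked point to the top-left corner of its window and collecting the survivors. The offset below assumes the tire ground-contact point is the bottom centre of the M × N window; the patent's original equation image is not reproduced, so this offset is an assumption:

```python
import numpy as np

def monitoring_points(roi_prime, M, N):
    """Shift each 255-point of ROI' (tire ground-contact point, assumed
    bottom centre of an M x N window) to the window's top-left corner and
    collect the result as the monitoring-point array A of (u, v) pairs."""
    vs, us = np.nonzero(roi_prime == 255)
    return [(int(u) - M // 2, int(v) - N) for u, v in zip(us, vs)]

roi = np.zeros((20, 20), dtype=np.uint8)
roi[10, 8] = 255          # one candidate tire point at (u=8, v=10)
A = monitoring_points(roi, 6, 8)
```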
Step S6: for each element (u, v) of the monitoring-point array A, the detection window it represents is (u, v, M, N). The detection window and the side-view image are input; feature extraction and computation are performed on the window, and the computed feature values are fed to a neural network for classifier judgement. The feature extraction uses Haar rectangular features, LBP features or HOG features; the classifier is a cascade of weak classifiers or an SVM classifier. Detection is performed with known techniques, and whether the window contains the target object is output.
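Of the features named in step S6, LBP is the simplest to sketch. The fragment below computes a normalised 8-neighbour Local Binary Pattern histogram for one detection window; the patent does not fix the exact LBP variant, so this basic form is an assumption, and the downstream classifier (cascade or SVM) is omitted:

```python
import numpy as np

def lbp_feature(win):
    """Normalised 256-bin histogram of 8-neighbour LBP codes over a window
    (border pixels skipped); one candidate feature vector for step S6."""
    c = win[1:-1, 1:-1]                      # centre pixels
    h, w = c.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = win[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]   # shifted neighbours
        codes |= (nb >= c).astype(np.uint8) << bit        # set bit if >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

win = np.array([[1.0, 2.0, 3.0, 4.0],
                [5.0, 6.0, 7.0, 8.0],
                [9.0, 8.0, 7.0, 6.0],
                [5.0, 4.0, 3.0, 2.0]])
f = lbp_feature(win)
```

In the described pipeline, such a feature vector would be computed for every window in the array A and passed to the classifier.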
Fig. 2 shows the blind-zone vehicle monitoring system based on the surround-view cameras of an automobile. The system includes a surround-view fisheye camera, a processor and a display unit; the surround-view camera is connected to the processor, and the processor is connected to the display unit via bidirectional communication.
The perspective-image module captures the original fisheye image with the surround-view fisheye camera, establishes the body coordinate system (origin at the ground-plane centre point of the minimum enclosing rectangle of the vehicle's vertical projection onto the ground; straight ahead the positive X direction, horizontally right the Z direction, vertically upward the Y direction) and corrects the image with the distortion-correction equations of step S1 to obtain the corrected perspective image, the symbols having the same meanings as in step S1.
The calibration unit obtains the intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the body coordinate system, by Zhang Zhengyou's calibration method. The system further includes a perspective-distortion-free image unit and an undistorted tire image unit.
The side-view image module rotates the surround-view fisheye camera to obtain a virtual camera and maps the real camera to the virtual camera by the equation Pv = s·Kv·R0V·K0^(-1)·P0, generating the side-view image, the symbols having the same meanings as in step S2.
The perspective-distortion-free image unit rotates the surround-view fisheye camera to obtain a virtual camera whose optical axis is perpendicular to the detection plane, through which an image free of perspective distortion is obtained.
The undistorted tire image unit points the virtual camera to the left or right of the current vehicle during normal driving, so that an undistorted tire image of a vehicle behind and to the side is obtained.
The distance-matrix module obtains the intrinsic and extrinsic matrices of the virtual camera from the perspective-projection equation s·p = K·(R·P + t) and, considering only the ground (Y = 0), generates the ground distance matrix [X, Z, 1]^T ∝ (K·[r1 r3 t])^(-1)·[u, v, 1]^T, the symbols having the same meanings as in step S3.
The monitored-area image ROI' module expresses the rectangular monitored ground region relative to the distance matrix and generates the monitored-area image ROI' represented by tire ground-contact points: for each point (u, v) of the distance matrix with value (x, z), if X- ≤ x ≤ X+ and Z- ≤ z ≤ Z+ then ROI'(u, v) = 255, otherwise ROI'(u, v) = 0, the thresholds and values having the same meanings as in step S4.
The monitoring-point array module converts the monitored-area image ROI' into the monitored-area image ROI represented by the top-left corners of the corresponding detection rectangles, and then into the monitoring-point array A used for concrete object detection, M × N pixels being the detection-window size of the target object.
The detection output module takes, for each element (u, v) of the monitoring-point array A, the detection window (u, v, M, N), inputs the window and the side-view image, detects with known techniques and outputs whether the window contains the target object.
The judgement unit performs feature extraction and computation on the detection window and feeds the computed feature values to a neural network for classifier judgement.
The number of surround-view fisheye cameras is scalable.
The invention overcomes the viewing-angle differences caused by vehicles imaging at different positions in the blind zone and achieves a good detection effect; at the same time it can monitor a large blind-zone area, delivers more reliable detection results and runs at high speed, which is of great use during lane changes and merging.
By converting the images of the automobile's surround-view fisheye cameras into the side-view images of perspective cameras facing horizontally sideways, vehicle detection is reduced to tire detection, which eliminates the imaging differences caused by the varying viewing angles of an object at different positions in the blind zone and achieves good detection accuracy and reliability. The system does not necessarily depend on three cameras: the description below is illustrated with the most complete three-camera configuration, whose monitored area is largest, and examples based on one or two cameras are likewise given in the embodiments.
The three surround-view fisheye cameras on the left side, right side and rear of the automobile output the original fisheye images F1, F2 and F3 respectively. The vehicle-body coordinate system is defined as follows: the origin is the bottom-center point of the minimum bounding rectangle of the automobile's vertical projection onto the ground, with the X direction longitudinally forward, the Z direction horizontally to the right, and the Y direction vertically upward. The intrinsic matrices K1, K2 and K3 of the left, right and rear fisheye cameras, their radial and tangential distortion parameters, and the extrinsic matrices [R1, t1], [R2, t2] and [R3, t3] of the cameras relative to the vehicle-body coordinate system are obtained by the known Zhang Zhengyou calibration method.
The original fisheye images are corrected according to equations (1) to (3), yielding the corrected perspective images. Here (u′, v′) is the pixel coordinate on the corrected perspective image, (u0, v0) is the principal-point pixel coordinate, fx and fy are the focal lengths in the horizontal and vertical image directions respectively, k1, k2, k3, k4, k5 and k6 are radial distortion parameters, p1 and p2 are tangential distortion parameters, r² = x′² + y′², (u, v) is the corresponding pixel coordinate on the original image, (x′, y′) is the coordinate of the actual image point in millimeters, and (x″, y″) is the ideal image-point coordinate computed by the perspective model. The corrected perspective images of the left, right and rear fisheye images F1, F2 and F3 are I′1, I′2 and I′3 respectively.
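The parameter set above (k1–k6 radial, p1 and p2 tangential) matches the rational distortion model used by standard calibration toolchains such as OpenCV. Purely as an illustrative sketch — the function names and the backward-mapping approach are assumptions, not taken from the patent — the correction can be implemented by computing, for every pixel of the corrected perspective image, the source pixel to sample in the original fisheye image:

```python
import numpy as np

def distort_points(xy, k, p):
    """Apply the rational radial + tangential distortion model to ideal
    normalized coordinates xy (N x 2); k = (k1..k6), p = (p1, p2)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = (1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3) / \
             (1 + k[3] * r2 + k[4] * r2**2 + k[5] * r2**3)
    xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return np.stack([xd, yd], axis=1)

def undistort_map(w, h, K, k, p):
    """For every pixel (u', v') of the w x h corrected image, return the
    pixel (u, v) of the original image to sample from."""
    grid = np.stack(np.meshgrid(np.arange(w), np.arange(h)), axis=-1)
    uv = grid.reshape(-1, 2).astype(float)
    # back-project through the intrinsics to ideal normalized coordinates
    xn = (uv[:, 0] - K[0, 2]) / K[0, 0]
    yn = (uv[:, 1] - K[1, 2]) / K[1, 1]
    xyd = distort_points(np.stack([xn, yn], axis=1), k, p)
    # re-project the distorted coordinates into the original image
    u = K[0, 0] * xyd[:, 0] + K[0, 2]
    v = K[1, 1] * xyd[:, 1] + K[1, 2]
    return u.reshape(h, w), v.reshape(h, w)
```

With real calibration data the two maps would be fed to an interpolating sampler (for example cv2.remap) to produce the corrected perspective image.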
By rotating the surround-view fisheye camera, a virtual camera whose optical axis is perpendicular to the plane of interest is obtained. Through this virtual camera, an image free of perspective distortion can be obtained. During normal driving, a vehicle behind and to the side travels parallel to the current vehicle, and with the virtual camera pointed to the left or right of the current vehicle, an undistorted tire image of that vehicle can be obtained. When the real camera and the virtual camera have no relative translation, points on the real camera and the virtual camera can be mapped to each other according to equation (4).
s·pV = KV·R0V·K0⁻¹·p0   (4)

where pV and p0 are the points on the virtual-camera and real-camera images respectively, s is a scale factor, KV and K0 are the intrinsic matrices of the virtual camera and the real camera respectively, R0V denotes the rotation matrix between the two cameras (R0V = RV·R0⁻¹), R0 denotes the rotation matrix of the real camera relative to the reference coordinate system, and RV denotes the rotation matrix of the virtual camera relative to the vehicle-body coordinate system.
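Because the virtual camera differs from the real camera by a pure rotation (no relative translation), equation (4) reduces the re-projection to a single 3 × 3 homography. A minimal NumPy sketch of this mapping (function names are hypothetical):

```python
import numpy as np

def virtual_view_homography(K0, Kv, R0, Rv):
    """Homography sending real-camera pixels to virtual-camera pixels.
    R0v = Rv @ R0.T is the rotation between the two cameras."""
    R0v = Rv @ R0.T
    return Kv @ R0v @ np.linalg.inv(K0)

def warp_point(H, p):
    """Map one pixel (u, v) through the homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the whole side-view image would be produced by warping the corrected perspective image with this homography (for example with cv2.warpPerspective).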
According to equation (4), the corrected left image I′1 and right image I′2 are converted into the left side-view image I1 and the right side-view image I2, corresponding to the monitored areas on the left and right of the vehicle respectively; the corrected rear image I′3 is converted into two side-view images, the left one denoted I3 and the right one denoted I4, corresponding to the monitored areas at the left rear and right rear of the vehicle respectively. Converting the original fisheye images into side-view images not only removes the fisheye distortion but also turns the target object, which originally had to be detected from multiple viewpoints, into a single-viewpoint target, improving both detection performance and real-time performance.
According to the basic principle of perspective projection, equation (5) holds, where P = (X, Y, Z)ᵀ denotes a three-dimensional point in the vehicle-body coordinate system, p = (u, v, 1)ᵀ denotes the homogeneous pixel coordinate in the corresponding image coordinate system, s′ denotes a scale factor, and K, R and t are the intrinsic matrix and extrinsic matrix of the virtual camera.
s′p = K(RP + t)   (5)
Considering only the ground plane and setting Y = 0 gives:

s′p = K·[r1 r3 t]·(X, Z, 1)ᵀ   (6)

where r1 and r3 denote the first and third columns of R.
According to equation (6), given the pixel coordinates of a side-view image, s′ and the coordinates (x, z) in the X and Z directions of the vehicle-body coordinate system can be computed. Distance matrices D1, D2, D3 and D4 in the vehicle-body coordinate system are generated for the four side-view images I1, I2, I3 and I4. A distance matrix has the same resolution as its side-view image, and the value of each point represents the X- and Z-direction ground distances, in the vehicle-body coordinate system, of the corresponding point in the side-view image.
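With Y = 0, equation (6) says that the 3 × 3 matrix K·[r1 r3 t] maps ground coordinates (X, Z, 1) to pixels, so inverting it per pixel yields the distance matrix. A hedged NumPy illustration (the function name and conventions are assumptions):

```python
import numpy as np

def ground_distance_matrix(w, h, K, R, t):
    """For each side-view pixel (u, v), intersect its viewing ray with the
    ground plane Y = 0 and return the (X, Z) body-frame coordinates."""
    # H maps (X, Z, 1) to s'(u, v, 1); its inverse recovers the ground point
    H = K @ np.column_stack([R[:, 0], R[:, 2], t])
    Hinv = np.linalg.inv(H)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    g = Hinv @ pix.astype(float)
    X = (g[0] / g[2]).reshape(h, w)
    Z = (g[1] / g[2]).reshape(h, w)
    return X, Z
```

The two returned arrays together form the distance matrix D described above, at the same resolution as the side-view image.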
The blind-zone monitoring range generally covers the side and rear side of the vehicle, and its vertical projection onto the ground can be represented by a rectangular region:

X⁻ ≤ x ≤ X⁺, Z⁻ ≤ z ≤ Z⁺   (7)
where X⁻ and X⁺ are the preset minimum and maximum thresholds to be monitored in the X direction of the vehicle-body coordinate system, and Z⁻ and Z⁺ are the preset minimum and maximum thresholds in the Z direction.
For the four distance matrices D1, D2, D3 and D4, monitored-area images are generated respectively, denoted ROI′1, ROI′2, ROI′3 and ROI′4. The generation procedure is as follows: traverse every point of the distance matrix D; for a point (u, v) whose value is (x, z), if the value satisfies inequality (7), set ROI′(u, v) = 255, otherwise set ROI′(u, v) = 0.
A monitored-area image has the same resolution as its distance matrix, and the value of each point is either 0 or 255, where 0 indicates that the point requires no subsequent object detection and 255 indicates that it does. By generating monitored-area images, target objects are detected only within the monitored region, which greatly reduces the time spent on object detection in the subsequent stage.
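The generation of ROI′ from a distance matrix is a direct per-point threshold against the rectangle of inequality (7); a small sketch under the 0/255 convention used above (the function name is illustrative):

```python
import numpy as np

def monitored_area_image(X, Z, x_min, x_max, z_min, z_max):
    """Build the monitored-area image ROI': 255 where the ground point
    falls inside the preset blind-zone rectangle, 0 elsewhere."""
    inside = (X >= x_min) & (X <= x_max) & (Z >= z_min) & (Z <= z_max)
    return np.where(inside, 255, 0).astype(np.uint8)
```

Vectorizing the threshold this way replaces the explicit per-point traversal while producing the same ROI′ image.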
Feature extraction and target classification generally require as input a rectangular window represented by its top-left corner coordinate together with its width and height, so the monitored area ROI′, represented by the bottom-center point of the tire's minimum bounding rectangle, is converted here into the monitored area ROI represented by the top-left corner coordinate of that rectangle. Let the detection window of the target object be M × N pixels; according to equation (8), in top-to-bottom, left-to-right order, the monitored area ROI′ represented by tire bottom-center points is converted into the monitored area ROI represented by window top-left coordinates (a bottom-center point (u, v) corresponds to the top-left corner (u − M/2, v − N) when the window is taken as M pixels wide and N pixels high).
At the same time, the monitored area ROI can be further converted into a monitoring-point array A for the actual object detection, so that during detection only the array A needs to be traversed, instead of traversing the whole image and checking against the monitored area ROI, which further improves the real-time performance of the system. The method adopted is: traverse each point of ROI in top-to-bottom, left-to-right order and append the position (u, v) of every point whose value is 255 to the array A, forming four monitoring-point arrays, denoted A1, A2, A3 and A4. After the monitoring-point arrays are obtained, subsequent object detection only needs to traverse these arrays.
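Collecting the monitoring-point array A can be sketched as below; np.nonzero scans row-major, which matches the top-to-bottom, left-to-right order described above, and the (u, v) = (column, row) convention is an assumption:

```python
import numpy as np

def monitoring_point_array(roi):
    """Collect the (u, v) positions of all 255-valued points of the
    monitored-area image, in top-to-bottom, left-to-right order."""
    rows, cols = np.nonzero(roi == 255)  # nonzero scans row-major
    return list(zip(cols.tolist(), rows.tolist()))  # (u, v) = (col, row)
```

Traversing this short list instead of the full image is what yields the real-time gain claimed above.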
For each element (u, v) of a monitoring array A, the detection window represented is (u, v, M, N). The detection window and the side-view image are input, and wheel detection is performed with known techniques: first, feature extraction and computation are carried out on the detection window, typically using Haar rectangular features, LBP features or HOG features; the computed feature values are then fed to a classifier such as a neural network, a cascade of weak classifiers, or an SVM, which outputs whether the window contains a target object. The classifier used is generally pre-trained on labeled data.
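The detection stage then reduces to evaluating a pre-trained classifier at each monitoring point. In the sketch below, `classify` is a stand-in for any of the classifiers named above (Haar/LBP/HOG features with a cascade, SVM or neural network), and the window is taken as M pixels wide and N pixels high — both assumptions, not specified by the patent:

```python
import numpy as np

def detect_targets(image, points, M, N, classify):
    """Evaluate `classify` on the M x N detection window anchored at each
    monitoring point (u, v); return the windows flagged as targets."""
    hits = []
    for (u, v) in points:
        window = image[v:v + N, u:u + M]
        if window.shape[:2] != (N, M):
            continue  # window would extend past the image border
        if classify(window):
            hits.append((u, v, M, N))
    return hits
```

Any trained model exposing a boolean decision per window can be dropped in as `classify` without changing the loop.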
This embodiment realizes the system with three surround-view fisheye cameras mounted on the left side, right side and rear of the automobile. The cameras capture the original fisheye images F at a resolution of 640 × 480 pixels. The intrinsic matrices obtained by calibration are:
The converted side-view images I have a resolution of 400 × 160 pixels.
The blind-zone monitoring ranges are set as follows:
The side-view image corresponding to the left camera covers X-direction [−4 m, −1.5 m] and Z-direction [−1 m, 3 m].
The side-view image corresponding to the right camera covers X-direction [0.5 m, 3.0 m] and Z-direction [−1 m, 3 m].
The left side-view image corresponding to the rear camera covers X-direction [−4 m, −1.5 m] and Z-direction [−4 m, 0 m].
The right side-view image corresponding to the rear camera covers X-direction [0.5 m, 3.0 m] and Z-direction [−4 m, 0 m].
The monitored area ROI is determined by the above blind-zone monitoring ranges.
A known cascade classifier based on Haar rectangular features is adopted for tire detection on the side-view images. From the detection results it can be seen that the side-view images obtained by the camera mounted on the side of the automobile and by the rear camera are complementary in their detection regions and share a certain overlap. In the image obtained by the side camera, the region from the rearview mirror to the tail of the vehicle is relatively clear, and the image gradually blurs further rearward from the tail region; in the image obtained by the rear camera, the region from the tail to 3–5 meters behind it is relatively clear, and the image then blurs further back. The advantage of adopting three fisheye cameras is a larger monitoring range, and the overlap between the left and right blind-zone regions makes detection more reliable.
The system can also be realized with only two fisheye cameras mounted on the left and right sides of the automobile. The converted side-view images of the left and right cameras correspond to the blind zones on the left and right sides respectively. The rest of the procedure is identical to the installation with three surround-view fisheye cameras.
The system can also be realized with only one fisheye camera mounted at the rear of the automobile. The rear fisheye camera image can be simultaneously converted into a side-view image facing left and a side-view image facing right, corresponding to the left-rear and right-rear blind zones respectively. The rest of the procedure is identical to the installation with three surround-view fisheye cameras.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A method for monitoring vehicles in a blind zone based on automobile surround-view cameras, characterized in that it uses surround-view fisheye cameras, a processor and a display unit, the method comprising the following steps:
A. A surround-view fisheye camera captures an original fisheye image, and a vehicle-body coordinate system is established, with the bottom-center point of the minimum bounding rectangle of the automobile's vertical projection onto the ground as the origin, the X direction longitudinally forward, the Z direction horizontally to the right, and the Y direction vertically upward; the image is corrected through the correction equations to obtain a corrected perspective image; where (u′, v′) is the pixel coordinate on the corrected perspective image, (u0, v0) is the principal-point pixel coordinate, fx and fy are the focal lengths in the horizontal and vertical image directions respectively, k1, k2, k3, k4, k5 and k6 are radial distortion parameters, p1 and p2 are tangential distortion parameters, r² = x′² + y′², (u, v) is the corresponding pixel coordinate on the original image, (x′, y′) is the coordinate of the actual image point in millimeters, and (x″, y″) is the ideal image-point coordinate computed by the perspective model;
B. The surround-view fisheye camera is rotated to obtain a virtual camera, and the real camera and the virtual camera are mapped to each other through the equation s·pV = KV·R0V·K0⁻¹·p0 to generate a side-view image; where pV and p0 are the points on the virtual-camera and real-camera images respectively, s is a scale factor, KV and K0 are the intrinsic matrices of the virtual camera and the real camera respectively, R0V denotes the rotation matrix between the two cameras, R0 denotes the rotation matrix of the real camera relative to the reference coordinate system, and RV denotes the rotation matrix of the virtual camera relative to the vehicle-body coordinate system;
C. According to the basic principle of perspective projection, the intrinsic matrix and extrinsic matrix of the virtual camera are obtained through the equation s′p = K(RP + t); considering only the ground, Y = 0 is set and the ground distance matrix is generated through the resulting ground-plane equation; where P = (X, Y, Z)ᵀ denotes a three-dimensional point in the vehicle-body coordinate system, p = (u, v, 1)ᵀ denotes the homogeneous pixel coordinate in the corresponding image coordinate system, s′ denotes the scale factor, K, R and t are the intrinsic matrix and extrinsic matrix of the virtual camera, and (x, z) are the coordinates in the X and Z directions of the vehicle-body coordinate system;
D. The rectangular region representing the vertical projection of the monitoring range onto the ground is applied to the distance matrix to generate a monitored-area image ROI′ represented by tire bottom-center points: each point of the distance matrix is traversed, and for a point (u, v) whose value is (x, z), if the value satisfies the inequality X⁻ ≤ x ≤ X⁺ and Z⁻ ≤ z ≤ Z⁺ then ROI′(u, v) = 255, otherwise ROI′(u, v) = 0; where X⁻ and X⁺ are the preset minimum and maximum thresholds to be monitored in the X direction of the vehicle-body coordinate system, and Z⁻ and Z⁺ are the preset minimum and maximum thresholds in the Z direction; the value of each point is either 0 or 255, 0 indicating that the point requires no subsequent object detection and 255 indicating that it does;
E. The monitored-area image ROI′ represented by tire bottom-center points is converted, through a coordinate-shift equation, into the monitored-area image ROI represented by the top-left corner coordinate of the detection window, and further into a monitoring-point array A used for the actual object detection; where M × N pixels is the detection-window size for the target object;
F. For each element (u, v) of the monitoring-point array A, the detection window represented is (u, v, M, N); the detection window and the side-view image are input, detection is performed using known techniques, and whether the window contains a target object is finally output.
2. The blind-zone vehicle monitoring method according to claim 1, characterized in that step A further comprises:
A1. obtaining, by the Zhang Zhengyou calibration method, the intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the vehicle-body coordinate system.
3. The blind-zone vehicle monitoring method according to claim 1, characterized in that step B further comprises:
B1. rotating the surround-view fisheye camera to obtain a virtual camera whose optical axis is perpendicular to the detected plane, the virtual camera yielding an image free of perspective distortion;
B2. during normal driving, pointing the virtual camera to the left or right of the current vehicle to obtain undistorted tire images of vehicles behind and to the side.
4. The blind-zone vehicle monitoring method according to claim 1, characterized in that step F further comprises:
F1. performing feature extraction and computation on the detection window, and then inputting the computed feature values to a neural network for classification.
5. The blind-zone vehicle monitoring method according to claim 4, characterized in that the feature extraction in step F1 uses Haar rectangular features, LBP features or HOG features, and the classifier in step F1 is a cascade of weak classifiers or an SVM classifier.
6. A system for monitoring vehicles in a blind zone based on automobile surround-view cameras, the system comprising surround-view fisheye cameras, a processor and a display unit, the surround-view cameras being connected to the processor, and the processor being in bidirectional communication with the display unit, characterized in that the system comprises the following modules:
The perspective-image module is configured to capture an original fisheye image with a surround-view fisheye camera and establish a vehicle-body coordinate system, with the bottom-center point of the minimum bounding rectangle of the automobile's vertical projection onto the ground as the origin, the X direction longitudinally forward, the Z direction horizontally to the right, and the Y direction vertically upward, and to correct the image through the correction equations to obtain a corrected perspective image; where (u′, v′) is the pixel coordinate on the corrected perspective image, (u0, v0) is the principal-point pixel coordinate, fx and fy are the focal lengths in the horizontal and vertical image directions respectively, k1, k2, k3, k4, k5 and k6 are radial distortion parameters, p1 and p2 are tangential distortion parameters, r² = x′² + y′², (u, v) is the corresponding pixel coordinate on the original image, (x′, y′) is the coordinate of the actual image point in millimeters, and (x″, y″) is the ideal image-point coordinate computed by the perspective model;
The side-view image module is configured to rotate the surround-view fisheye camera to obtain a virtual camera, map the real camera and the virtual camera to each other through the equation s·pV = KV·R0V·K0⁻¹·p0, and generate a side-view image; where pV and p0 are the points on the virtual-camera and real-camera images respectively, s is a scale factor, KV and K0 are the intrinsic matrices of the virtual camera and the real camera respectively, R0V denotes the rotation matrix between the two cameras, R0 denotes the rotation matrix of the real camera relative to the reference coordinate system, and RV denotes the rotation matrix of the virtual camera relative to the vehicle-body coordinate system;
The distance-matrix module is configured to obtain, according to the basic principle of perspective projection, the intrinsic matrix and extrinsic matrix of the virtual camera through the equation s′p = K(RP + t), and, considering only the ground, to set Y = 0 and generate the ground distance matrix through the resulting ground-plane equation; where P = (X, Y, Z)ᵀ denotes a three-dimensional point in the vehicle-body coordinate system, p = (u, v, 1)ᵀ denotes the homogeneous pixel coordinate in the corresponding image coordinate system, s′ denotes the scale factor, K, R and t are the intrinsic matrix and extrinsic matrix of the virtual camera, and (x, z) are the coordinates in the X and Z directions of the vehicle-body coordinate system;
The monitored-area image ROI′ module is configured to apply the rectangular region representing the vertical projection of the monitoring range onto the ground to the distance matrix and to generate a monitored-area image ROI′ represented by tire bottom-center points: each point of the distance matrix is traversed, and for a point (u, v) whose value is (x, z), if the value satisfies the inequality X⁻ ≤ x ≤ X⁺ and Z⁻ ≤ z ≤ Z⁺ then ROI′(u, v) = 255, otherwise ROI′(u, v) = 0; where X⁻ and X⁺ are the preset minimum and maximum thresholds to be monitored in the X direction of the vehicle-body coordinate system, and Z⁻ and Z⁺ are the preset minimum and maximum thresholds in the Z direction; the value of each point is either 0 or 255, 0 indicating that the point requires no subsequent object detection and 255 indicating that it does;
The monitoring-point array module is configured to convert the monitored-area image ROI′, represented by tire bottom-center points, through a coordinate-shift equation into the monitored-area image ROI represented by the top-left corner coordinate of the detection window, and further into a monitoring-point array A used for the actual object detection; where M × N pixels is the detection-window size for the target object;
The detection output module is configured to form, for each element (u, v) of the monitoring-point array A, the detection window (u, v, M, N), input the detection window and the side-view image, perform detection using known techniques, and finally output whether the window contains a target object.
7. The blind-zone vehicle monitoring system according to claim 6, characterized in that it further comprises a calibration unit;
the calibration unit is configured to obtain, by the Zhang Zhengyou calibration method, the intrinsic matrix, radial distortion parameters and tangential distortion parameters of the surround-view fisheye camera, and its extrinsic matrix relative to the vehicle-body coordinate system.
8. The blind-zone vehicle monitoring system according to claim 6, characterized in that it further comprises a perspective-distortion-free image unit and an undistorted tire image unit;
the perspective-distortion-free image unit is configured to rotate the surround-view fisheye camera to obtain a virtual camera whose optical axis is perpendicular to the detected plane, the virtual camera yielding an image free of perspective distortion;
the undistorted tire image unit is configured to point the virtual camera to the left or right of the current vehicle during normal driving, so as to obtain undistorted tire images of vehicles behind and to the side.
9. The blind-zone vehicle monitoring system according to claim 6, characterized in that it further comprises a judging unit;
the judging unit is configured to perform feature extraction and computation on the detection window and then input the computed feature values to a neural network for classification.
10. The blind-zone vehicle monitoring system according to any one of claims 6 to 9, characterized in that the surround-view fisheye cameras are extensible.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610244615.8A CN105711501B (en) | 2016-04-19 | 2016-04-19 | Vehicle monitoring method and system in a kind of blind area that camera is looked around based on automobile |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105711501A true CN105711501A (en) | 2016-06-29 |
CN105711501B CN105711501B (en) | 2017-10-31 |
Family
ID=56160392
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610244615.8A Active CN105711501B (en) | 2016-04-19 | 2016-04-19 | Vehicle monitoring method and system in a kind of blind area that camera is looked around based on automobile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105711501B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106740469A (en) * | 2016-11-04 | 2017-05-31 | 惠州市德赛西威汽车电子股份有限公司 | The detection method of blind area prior-warning device |
CN106846410A (en) * | 2016-12-20 | 2017-06-13 | 北京鑫洋泉电子科技有限公司 | Based on three-dimensional environment imaging method and device |
CN106886759A (en) * | 2017-01-22 | 2017-06-23 | 西安科技大学 | It is a kind of be applied to large truck go blind area safety driving system and method |
CN107577988A (en) * | 2017-08-03 | 2018-01-12 | 东软集团股份有限公司 | Realize the method, apparatus and storage medium, program product of side vehicle location |
CN108121941A (en) * | 2016-11-30 | 2018-06-05 | 上海联合道路交通安全科学研究中心 | A kind of object speed calculation method based on monitoring device |
CN108537844A (en) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | A kind of vision SLAM winding detection methods of fusion geological information |
CN108647638A (en) * | 2018-05-09 | 2018-10-12 | 东软集团股份有限公司 | A kind of vehicle location detection method and device |
CN108764115A (en) * | 2018-05-24 | 2018-11-06 | 东北大学 | A kind of truck danger based reminding method |
CN110610523A (en) * | 2018-06-15 | 2019-12-24 | 杭州海康威视数字技术股份有限公司 | Automobile look-around calibration method and device and computer readable storage medium |
CN111559314A (en) * | 2020-04-27 | 2020-08-21 | 长沙立中汽车设计开发股份有限公司 | Depth and image information fused 3D enhanced panoramic looking-around system and implementation method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011095321A (en) * | 2009-10-27 | 2011-05-12 | Toshiba Alpine Automotive Technology Corp | Image display device for vehicle |
CN103942532A (en) * | 2014-03-14 | 2014-07-23 | 吉林大学 | Dead zone vehicle detecting method based on vehicle-mounted camera |
CN104954663A (en) * | 2014-03-24 | 2015-09-30 | 东芝阿尔派·汽车技术有限公司 | Image processing apparatus and image processing method |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106740469A (en) * | 2016-11-04 | 2017-05-31 | 惠州市德赛西威汽车电子股份有限公司 | The detection method of blind area prior-warning device |
CN106740469B (en) * | 2016-11-04 | 2019-06-21 | 惠州市德赛西威汽车电子股份有限公司 | The detection method of blind area prior-warning device |
CN108121941A (en) * | 2016-11-30 | 2018-06-05 | 上海联合道路交通安全科学研究中心 | A kind of object speed calculation method based on monitoring device |
CN106846410A (en) * | 2016-12-20 | 2017-06-13 | 北京鑫洋泉电子科技有限公司 | Based on three-dimensional environment imaging method and device |
CN106846410B (en) * | 2016-12-20 | 2020-06-19 | 北京鑫洋泉电子科技有限公司 | Driving environment imaging method and device based on three dimensions |
CN106886759A (en) * | 2017-01-22 | 2017-06-23 | 西安科技大学 | It is a kind of be applied to large truck go blind area safety driving system and method |
US10495754B2 (en) | 2017-08-03 | 2019-12-03 | Neusoft Corporation | Method, apparatus, storage medium and program product for side vehicle positioning |
CN107577988A (en) * | 2017-08-03 | 2018-01-12 | 东软集团股份有限公司 | Realize the method, apparatus and storage medium, program product of side vehicle location |
CN107577988B (en) * | 2017-08-03 | 2020-05-26 | 东软集团股份有限公司 | Method, device, storage medium and program product for realizing side vehicle positioning |
CN108537844A (en) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | A kind of vision SLAM winding detection methods of fusion geological information |
CN108537844B (en) * | 2018-03-16 | 2021-11-26 | 上海交通大学 | Visual SLAM loop detection method fusing geometric information |
CN108647638A (en) * | 2018-05-09 | 2018-10-12 | 东软集团股份有限公司 | A kind of vehicle location detection method and device |
US10783657B2 (en) | 2018-05-09 | 2020-09-22 | Neusoft Corporation | Method and apparatus for vehicle position detection |
CN108647638B (en) * | 2018-05-09 | 2021-10-12 | 东软睿驰汽车技术(上海)有限公司 | Vehicle position detection method and device |
CN108764115A (en) * | 2018-05-24 | 2018-11-06 | 东北大学 | A kind of truck danger based reminding method |
CN108764115B (en) * | 2018-05-24 | 2021-12-14 | 东北大学 | Truck danger reminding method |
CN110610523A (en) * | 2018-06-15 | 2019-12-24 | 杭州海康威视数字技术股份有限公司 | Automobile look-around calibration method and device and computer readable storage medium |
CN111559314A (en) * | 2020-04-27 | 2020-08-21 | 长沙立中汽车设计开发股份有限公司 | Depth and image information fused 3D enhanced panoramic looking-around system and implementation method |
Also Published As
Publication number | Publication date |
---|---|
CN105711501B (en) | 2017-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105711501A (en) | Car look-around camera-based car monitoring method and system in dead zone | |
US10783657B2 (en) | Method and apparatus for vehicle position detection | |
US8199975B2 (en) | System and method for side vision detection of obstacles for vehicles | |
CN108638999B (en) | Anti-collision early warning system and method based on 360-degree look-around input | |
US8232872B2 (en) | Cross traffic collision alert system | |
US6834232B1 (en) | Dual disimilar sensing object detection and targeting system | |
Broggi et al. | A new approach to urban pedestrian detection for automatic braking | |
CN105678787A (en) | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera | |
CN109435852A (en) | A kind of panorama type DAS (Driver Assistant System) and method for large truck | |
US9927811B1 (en) | Control system and method for controlling mobile warning triangle | |
CN110077399A (en) | A kind of vehicle collision avoidance method merged based on roadmarking, wheel detection | |
US20050271254A1 (en) | Adaptive template object classification system with a template generator | |
CN110816527A (en) | Vehicle-mounted night vision safety method and system | |
CN112215306A (en) | Target detection method based on fusion of monocular vision and millimeter wave radar | |
CN110065494A (en) | A kind of vehicle collision avoidance method based on wheel detection | |
CN101396989A (en) | Vehicle periphery monitoring apparatus and image displaying method | |
CN108021899A (en) | Vehicle intelligent front truck anti-collision early warning method based on binocular camera | |
US20210034903A1 (en) | Trailer hitching assist system with trailer coupler detection | |
Suzuki et al. | Sensor fusion-based pedestrian collision warning system with crosswalk detection | |
CN105059190A (en) | Vision-based automobile door-opening bump early-warning device and method | |
DE102021131051B4 (en) | Image recording system and image recording device | |
CN107145825A (en) | Ground level fitting, camera calibration method and system, car-mounted terminal | |
CN113905176A (en) | Panoramic image splicing method, driving assisting method and device and vehicle | |
CN103673977B (en) | The method and apparatus of rear dead zone of vehicle detection | |
Michalke et al. | Towards a closer fusion of active and passive safety: Optical flow-based detection of vehicle side collisions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |