CN112208438A - Driving auxiliary image generation method and system
- Publication number
- CN112208438A CN112208438A CN201910619488.9A CN201910619488A CN112208438A CN 112208438 A CN112208438 A CN 112208438A CN 201910619488 A CN201910619488 A CN 201910619488A CN 112208438 A CN112208438 A CN 112208438A
- Authority
- CN
- China
- Prior art keywords
- processing module
- vehicle
- dimensional projection
- projection model
- obstacle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/301—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8073—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for vehicle security, e.g. parked vehicle surveillance, burglar detection
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8093—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
A driving assistance image generation method comprises the following steps: (A) determining whether an obstacle detection result indicates that an obstacle exists; (B) determining whether the obstacle distance in the obstacle detection result is greater than a distance threshold; (C) selecting a first three-dimensional projection model when the obstacle distance is determined to be less than or equal to the distance threshold; (D) selecting a second three-dimensional projection model when the obstacle distance is determined to be greater than the distance threshold; (E) mapping the surrounding image onto the selected first or second three-dimensional projection model; and (F) converting the result of step (E) into a corresponding driving assistance image with a first view or a second view.
Description
Technical Field
The present invention relates to an image generation method, and more particularly to a driving assistance image generation method for a vehicle.
Background
An automobile surround-view (around-view) display system is one of the Advanced Driver Assistance System (ADAS) technologies. It displays a real-time 360-degree bird's-eye-view image of a vehicle and its surroundings to the driver, improving safety when parking or driving at low speed. In a conventional surround-view system, images of the vehicle's surroundings are captured by wide-angle cameras mounted at the front, rear, and both sides of the vehicle; the captured images are mapped onto a fixed three-dimensional projection model by inverse perspective projection; and a virtual camera then performs perspective projection on the model at different viewing angles to obtain two-dimensional images from different observation viewpoints.
However, the shape and size of the three-dimensional projection model used in a conventional surround-view system are fixed and cannot be adjusted dynamically as the driving situation changes, which limits the user experience.
Disclosure of Invention
An object of the present invention is to provide a driving assistance image generation method and system that dynamically adjust the function and parameters of the three-dimensional projection model according to changes in the driving situation, so as to provide a better user experience.
The driving assistance image generation method of the present invention is applicable to a vehicle and is implemented by a processing module. The processing module is electrically connected to a photographing unit and an obstacle detection module; the photographing unit is configured to generate and transmit at least one surrounding image related to the surrounding environment of the vehicle to the processing module, and the obstacle detection module is configured to generate and transmit an obstacle detection result related to an obstacle corresponding to the vehicle to the processing module. The driving assistance image generation method comprises the following steps:
(A) the processing module determines whether the obstacle detection result indicates that the obstacle exists;
(B) when the processing module determines that the obstacle exists, the obstacle detection result including an obstacle distance between the obstacle and the vehicle, the processing module determines whether the obstacle distance is greater than a distance threshold;
(C) when the processing module determines that the obstacle distance is less than or equal to the distance threshold, the processing module selects a first three-dimensional projection model having an elliptical arc;
(D) when the processing module determines that the obstacle distance is greater than the distance threshold, the processing module selects a second three-dimensional projection model having a circular arc;
(E) the processing module maps the at least one surrounding image onto the selected first or second three-dimensional projection model by inverse perspective projection; and
(F) the processing module converts the result of step (E) into a corresponding first-view or second-view driving assistance image by perspective projection.
In step (B), the vehicle information includes a vehicle speed of the vehicle, and the distance threshold is the product of the vehicle speed and a preset time.
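As a concrete illustration (not part of the patent text), the threshold computation in step (B) amounts to the following; the function name is illustrative, and the 2.5-second preset time is the value given later in the description:

```python
def distance_threshold(vehicle_speed_mps: float, preset_time_s: float = 2.5) -> float:
    """Distance threshold of step (B): vehicle speed times a preset time."""
    return vehicle_speed_mps * preset_time_s

# e.g. at 36 km/h (10 m/s): 10 * 2.5 = 25 m
assert distance_threshold(10.0) == 25.0
```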
In step (C), the functional formula of the first three-dimensional projection model is as follows:
where V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the first three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
In the driving assistance image generation method of the present invention, in step (D), the functional formula of the second three-dimensional projection model is as follows:
where V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the second three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
The step (F) of the driving assistance image generation method of the present invention includes the following substeps:
(F-1) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the obstacle distance in the obstacle detection result and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, wherein the obstacle detection result further includes an obstacle azimuth angle of the obstacle relative to the vehicle; and
(F-2) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the obstacle azimuth angle.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, and the method further comprises the following steps after step (A):
(G) when the processing module determines that the obstacle does not exist, the vehicle information including a vehicle speed of the vehicle, the processing module determines whether the vehicle speed is greater than a speed threshold;
(H) when the processing module determines that the vehicle speed is less than or equal to the speed threshold, the processing module selects the first three-dimensional projection model; and
(I) when the processing module determines that the vehicle speed is greater than the speed threshold, the processing module selects the second three-dimensional projection model.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, and step (F) comprises the following substeps:
(F-1) the vehicle information includes a set of turn signal parameters indicating whether the turn signals of the vehicle are turned on, and the processing module determines, according to the set of turn signal parameters, whether one of the turn signals of the vehicle is turned on;
(F-2) when the processing module determines that the turn signal is turned on, the processing module calculates a steering azimuth angle corresponding to the turn signal; and
(F-3) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, a preset depression angle, and the steering azimuth angle.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a lane line detection module, the lane line detection module being configured to generate and transmit a lane line detection result related to the lane in which the vehicle is traveling to the processing module, and step (F) comprises the following substeps:
(F-1) the processing module determines whether the lane line detection result indicates at least one lane line;
(F-2) when the processing module determines that the lane line detection result indicates the at least one lane line, the processing module determines whether the lane is an intersection according to the at least one lane line;
(F-3) when the determination result of step (F-2) is affirmative, the processing module calculates a lane view angle corresponding to the intersection according to the at least one lane line; and
(F-4) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model and the lane view angle.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, and the method further comprises the following substeps after step (F-2):
(F-5) when the determination result of step (F-2) is negative, the vehicle information including a vehicle speed of the vehicle, the processing module predicts a next driving position of the vehicle according to the at least one lane line and the vehicle speed;
(F-6) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the next driving position and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, and calculates a driving azimuth angle of the next driving position relative to the vehicle according to the next driving position; and
(F-7) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the driving azimuth angle.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, and the method further comprises the following substeps after step (F-1):
(F-8) when the processing module determines that the lane line detection result does not indicate any lane line, the vehicle information including a vehicle speed of the vehicle and a steering angle signal of the vehicle, the processing module predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal;
(F-9) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the next driving position and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, and calculates a driving azimuth angle of the next driving position relative to the vehicle according to the next driving position; and
(F-10) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the driving azimuth angle.
The driving assistance image generation system of the present invention is configured to execute the above driving assistance image generation method.
The beneficial effects of the present invention are as follows: the processing module adopts different three-dimensional projection models according to the obstacle distance between the obstacle and the vehicle, or according to the vehicle speed, so that the function and parameters of the three-dimensional projection model are adjusted dynamically as the driving situation changes, and driving images with different views are provided for the corresponding situations, giving the driver a better user experience. In addition, the next driving position of the vehicle can be predicted from the vehicle information, providing an anticipatory driving assistance image.
Drawings
Other features and effects of the present invention will become apparent from the following detailed description of the embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating an embodiment of a driving assistance image generation system according to the present invention;
FIG. 2 is a flowchart illustrating an embodiment of a driving assistance image generation method according to the present invention;
FIG. 3 is a perspective view illustrating a first three-dimensional projection model;
FIG. 4 is a schematic front view illustrating a projection of the first three-dimensional projection model onto a first plane;
FIG. 5 is a schematic side view illustrating a projection of the first three-dimensional projection model onto a second plane;
FIG. 6 is a schematic top view illustrating a projection of the first three-dimensional projection model onto a third plane;
FIG. 7 is a perspective view illustrating a second three-dimensional projection model;
FIG. 8 is a schematic front view illustrating the projection of the second three-dimensional projection model onto the first plane;
FIG. 9 is a schematic side view illustrating a projection of the second three-dimensional projection model onto the second plane;
FIG. 10 is a schematic top view illustrating a projection of the second three-dimensional projection model onto the third plane;
FIG. 11 is a schematic diagram illustrating the relationship of different obstacle distances to the first and second three-dimensional projection models;
FIG. 12 is a flowchart illustrating a first generation flow of the driving assistance image generation procedure according to the embodiment of the driving assistance image generation method of the present invention;
FIG. 13 is a flowchart illustrating a second generation flow of the driving assistance image generation procedure according to the embodiment of the driving assistance image generation method of the present invention;
FIG. 14 is an experimental schematic diagram showing a first-view driving assistance image presented after applying the first three-dimensional projection model of the present invention; and
FIG. 15 is an experimental schematic diagram showing a second-view driving assistance image presented after applying the second three-dimensional projection model of the present invention.
Detailed Description
Referring to fig. 1, the embodiment of the driving assistance image generation system of the present invention is applicable to a vehicle. The driving assistance image generation system comprises a photographing unit 11, an obstacle detection module 12, a sensing module 13, a lane line detection module 14 electrically connected to the photographing unit 11, and a processing module 15 electrically connected to the photographing unit 11, the obstacle detection module 12, the sensing module 13, and the lane line detection module 14.
The photographing unit 11 includes four photographing modules (not shown) respectively disposed on the front bumper, the rear bumper, and the left and right rear-view mirrors of the vehicle. Each photographing module is, for example, a fisheye camera, and photographs the surroundings of the vehicle to generate and transmit four surrounding images related to the surrounding environment to the processing module 15.
The obstacle detection module 12 is configured to generate and transmit an obstacle detection result related to an obstacle corresponding to the vehicle to the processing module 15. Using an existing obstacle detection technique, the obstacle detection module 12 determines whether an obstacle corresponding to the vehicle exists. When it determines that the obstacle exists, it detects the obstacle distance between the obstacle and the vehicle and the obstacle azimuth angle of the obstacle relative to the vehicle, and generates an obstacle detection result indicating that the obstacle exists and including the obstacle distance and the obstacle azimuth angle; when it determines that no obstacle exists, it generates an obstacle detection result indicating that no obstacle exists. In the present embodiment, the obstacle detection module 12 includes a radar detector (not shown) for detecting the obstacle distance and the obstacle azimuth angle; in other embodiments, however, the obstacle detection module 12 may instead obtain the obstacle distance and the obstacle azimuth angle from the surrounding image of the environment in front of the vehicle captured by the photographing unit 11, and the invention is not limited in this respect.
The sensing module 13 is configured to sense and transmit vehicle information related to the vehicle to the processing module 15. The vehicle information includes a vehicle speed of the vehicle, a set of turn signal parameters indicating whether the turn signals of the vehicle are turned on, and a steering angle signal of the vehicle. Since the manner in which the sensing module 13 obtains the vehicle information is not a feature of the present invention and is known to those skilled in the art, its details are omitted here for brevity.
The lane line detection module 14 is configured to generate and transmit a lane line detection result related to the lane in which the vehicle is traveling to the processing module 15, according to the surrounding image of the environment in front of the vehicle captured by the photographing unit 11. Using an existing lane line identification technique, the lane line detection module 14 determines whether at least one lane line corresponding to the lane exists in that surrounding image. When it determines that at least one lane line exists, it generates a lane line detection result indicating that the at least one lane line exists and including the identified lane line(s); otherwise, it generates a lane line detection result indicating that no lane line exists.
The following describes the operation of each component of the driving assistance image generation system in conjunction with the embodiment of the driving assistance image generation method of the present invention, which comprises, in order, a projection model decision procedure and a driving assistance image generation procedure.
Referring to figs. 1 and 2, the driving assistance image generation system implements the projection model decision procedure and the driving assistance image generation procedure of the method of the present invention through the following steps. The projection model decision procedure specifies how different three-dimensional projection models are selected in response to different driving situations, and the driving assistance image generation procedure specifies how driving assistance images with different views are obtained.
In step 201, the obstacle detection module 12 continuously generates and transmits the obstacle detection result related to the obstacle corresponding to the vehicle to the processing module 15.
In step 202, the sensing module 13 continuously generates and transmits the vehicle information related to the vehicle to the processing module 15.
In step 203, the lane line detection module 14 continuously generates and transmits the lane line detection result related to the lane in which the vehicle is traveling to the processing module 15.
In step 204, the processing module 15 determines whether the received obstacle detection result indicates that the obstacle exists. When the processing module 15 determines that the obstacle detection result indicates that the obstacle exists, the process proceeds to step 205; when the processing module 15 determines that the obstacle detection result indicates that the obstacle does not exist, the process proceeds to step 208.
In step 205, the processing module 15 determines whether the obstacle distance in the obstacle detection result is greater than a distance threshold. When the processing module 15 determines that the obstacle distance is less than or equal to the distance threshold, the process proceeds to step 206; when the processing module 15 determines that the obstacle distance is greater than the distance threshold, the process proceeds to step 207. It is worth mentioning that the distance threshold is the product of the vehicle speed of the vehicle and a preset time, which may be the time remaining before the last braking point is reached (e.g., 2.5 seconds). The processing module 15 first obtains the distance threshold from the vehicle speed sensed by the sensing module 13 and the preset time, and then determines whether the obstacle distance is greater than the distance threshold.
In step 206, the processing module 15 selects a first three-dimensional projection model having an elliptical arc (see fig. 3). The projection of the first three-dimensional projection model onto a first plane formed by the X-axis and the Z-axis is shown in fig. 4, its projection onto a second plane formed by the Y-axis and the Z-axis is shown in fig. 5, and its projection onto a third plane formed by the X-axis and the Y-axis is shown in fig. 6. The curve 501 (see fig. 4) projected onto the first plane by the curved surface of the first three-dimensional projection model and the curve 502 (see fig. 5) projected onto the second plane are both elliptical arcs. The function of the first three-dimensional projection model can be expressed as the following formula (1).
where V_l is the length of the vehicle, H is the height of a virtual camera 701 corresponding to the first three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance. In this embodiment, the preset distance d_w is set to 2 meters; however, the value of d_w may be adjusted according to the use requirements and is not limited thereto. The larger the value of d_w, the flatter the elliptical arcs projected by the curved surface of the first three-dimensional projection model onto the first and second planes; the smaller the value of d_w, the closer those elliptical arcs are to circular arcs. When d_w is set to 0, the curves projected onto the first and second planes are circular arcs.
In step 207, the processing module 15 selects a second three-dimensional projection model having a circular arc (see fig. 7). The projection of the second three-dimensional projection model onto the first plane is shown in fig. 8, its projection onto the second plane is shown in fig. 9, and its projection onto the third plane is shown in fig. 10. The curve 503 (see fig. 8) projected onto the first plane by the curved surface of the second three-dimensional projection model and the curve 504 (see fig. 9) projected onto the second plane are both circular arcs. The function of the second three-dimensional projection model can be expressed as the following formula (2).
where H' is the height of a virtual camera 701 corresponding to the second three-dimensional projection model. In this embodiment, the height of the virtual camera 701 of the first three-dimensional projection model is the same as that of the second three-dimensional projection model, that is, H' = H. Similarly, the value of d_w in the second three-dimensional projection model can be adjusted according to the use requirements and is not limited thereto; the larger the value of d_w, the closer the arcs projected by the curved surface of the second three-dimensional projection model onto the first and second planes are to straight lines. As d_w approaches infinity, the second three-dimensional projection model becomes substantially cylindrical.
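The formula images for (1) and (2) are not reproduced in this text, so the sketch below only illustrates the qualitative behaviour described above: a bowl whose wall cross-section is an elliptical arc that flattens as d_w grows, with equal semi-axes giving the circular-arc variant. The profile function and all names are assumptions, not the patent's formulas (1) and (2).

```python
import numpy as np

def bowl_profile(r, r_flat, a, b):
    """Height z of a hypothetical bowl surface at radius r from the vehicle.

    Flat ground for r <= r_flat, then a quarter-arc of an ellipse with
    horizontal semi-axis a and vertical semi-axis b (a == b gives a
    circular arc). A stand-in for the patent's formulas (1)/(2), which
    are not reproduced in the source text.
    """
    r = np.asarray(r, dtype=float)
    t = np.clip((r - r_flat) / a, 0.0, 1.0)  # 0 at the flat rim, 1 at the top
    return b * (1.0 - np.sqrt(1.0 - t ** 2))

r = np.linspace(0.0, 8.0, 9)
# First model (elliptical arc): d_w = 2 m stretches the horizontal
# semi-axis, flattening the arc, as the description says.
z_elliptical = bowl_profile(r, r_flat=3.0, a=3.0 + 2.0, b=2.0)
# Second model (circular arc): equal semi-axes.
z_circular = bowl_profile(r, r_flat=3.0, a=2.0, b=2.0)
```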
In step 208, the processing module 15 determines whether the vehicle speed in the received vehicle information is greater than a speed threshold. When the processing module 15 determines that the vehicle speed is not greater than the speed threshold, flow proceeds to step 209; when the processing module 15 determines that the vehicle speed is greater than the speed threshold, flow proceeds to step 210.
In step 209, the processing module 15 selects the first three-dimensional projection model (see fig. 3).
In step 210, the processing module 15 selects the second three-dimensional projection model (see fig. 7).
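Steps 204 through 210 together amount to the following selection logic (a sketch: the function and parameter names and the example speed threshold are illustrative, since the patent states no numeric speed threshold):

```python
def select_projection_model(obstacle_distance_m, vehicle_speed_mps,
                            preset_time_s=2.5, speed_threshold_mps=8.3):
    """Sketch of the projection model decision procedure (steps 204-210).

    obstacle_distance_m is None when the obstacle detection result
    indicates that no obstacle exists.
    """
    if obstacle_distance_m is not None:                   # step 204
        threshold = vehicle_speed_mps * preset_time_s     # step 205
        if obstacle_distance_m <= threshold:
            return "first"   # elliptical arc: larger near view (step 206)
        return "second"      # circular arc: magnifies far objects (step 207)
    if vehicle_speed_mps <= speed_threshold_mps:          # step 208
        return "first"       # step 209
    return "second"          # step 210
```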
In step 211, the processing module 15 maps the surrounding images onto the first or second three-dimensional projection model by inverse perspective projection, according to which of the two models was selected.
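In practice, step 211 comes down to computing, for each vertex of the selected model, where that vertex falls in each calibrated camera image, and colouring the vertex with the pixel found there. A minimal sketch, assuming already-undistorted images and known calibration (all names are illustrative):

```python
import numpy as np

def vertex_texture_coords(vertices_world, R, t, K):
    """Inverse perspective mapping (step 211), sketched for one camera.

    vertices_world : (N, 3) vertices of the selected 3D projection model
    R, t           : world-to-camera rotation (3x3) and translation (3,)
    K              : 3x3 intrinsic matrix of the (undistorted) camera
    Returns (N, 2) pixel coordinates used as texture coordinates.
    """
    pts_cam = vertices_world @ R.T + t   # world frame -> camera frame
    uv = pts_cam @ K.T                   # pinhole projection
    return uv[:, :2] / uv[:, 2:3]        # normalize by depth
```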
Selecting the first three-dimensional projection model when the obstacle distance is less than or equal to the distance threshold has the advantage that the first-view driving assistance image obtained from it (see fig. 14) covers a larger near range; selecting the second three-dimensional projection model when the obstacle distance is greater than the distance threshold has the advantage that the second-view driving assistance image obtained from it (see fig. 15) magnifies distant objects. When no obstacle exists, the same trade-off applies to vehicle speed: at low speed, the first model gives a better visual experience because of its larger near range; at high speed, the second model gives a better visual experience because it magnifies distant objects. As can be seen from figs. 11, 14, and 15, imaging with the first three-dimensional projection model (102 in fig. 11) has a larger near visual range, while imaging with the second three-dimensional projection model (101 in fig. 11) magnifies distant objects, so switching between the two models according to the obstacle distance provides the driver with a better visual experience.
In step 212, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model into a corresponding first-view or second-view driving assistance image.
The details of the driving assistance image generation procedure are described below with reference to the drawings. The procedure projects the populated three-dimensional projection model into driving assistance images with different viewing angles according to the driving situation, and is subdivided into a first generation flow and a second generation flow according to whether the obstacle detection result indicates that the obstacle exists.
Referring to figs. 1 and 12, when the obstacle detection result indicates that the obstacle exists, the driving assistance image generation system obtains driving assistance images with different views through the first generation flow, which includes the following steps.
In step 301, the processing module 15 calculates a first depression angle, associated with the downward shooting of the virtual camera 701, according to the obstacle distance in the obstacle detection result and the height of the virtual camera 701 (see figs. 3 and 7).
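The patent does not spell this calculation out; a natural geometric reading (an assumption, not the patent's stated formula) is the angle by which a virtual camera at height H must pitch down to look at a ground point at the obstacle distance d_obs:

```latex
\theta_{\text{depression}} = \arctan\left(\frac{H}{d_{\text{obs}}}\right)
```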
In step 302, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model, together with the first depression angle and the obstacle azimuth angle, into a corresponding two-dimensional first-view or second-view driving assistance image.
It is worth mentioning that the operation formula of the perspective projection conversion is given by the following formula (3):
where (x_i, y_i, z_i) is a point of the first or second three-dimensional projection model to be projected into the first-view or second-view driving assistance image, (u, v) is the projected two-dimensional point, θ_x = 0, θ_y is the first depression angle, θ_z is the obstacle azimuth angle, (x_M, y_M, z_M) is the position of the virtual camera 701, f_u and f_v denote the focal lengths of the virtual camera 701 along the U and V axes of the two-dimensional point (u, v), and c_u and c_v denote the image center point of the first-view or second-view driving assistance image.
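A sketch of what formula (3) computes: rotate a model point into the virtual camera's frame, then apply a pinhole projection. The rotation order below is an assumption, since the matrix in the source text is not legible:

```python
import numpy as np

def perspective_project(p_world, cam_pos, theta_x, theta_y, theta_z,
                        f_u, f_v, c_u, c_v):
    """Project a 3D model point to a 2D image point, as in formula (3).

    theta_x = 0, theta_y is the depression angle, theta_z is the azimuth
    angle; (f_u, f_v) are focal lengths and (c_u, c_v) the image center.
    """
    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    R = rot_x(theta_x) @ rot_y(theta_y) @ rot_z(theta_z)  # assumed order
    p_cam = R @ (np.asarray(p_world, float) - np.asarray(cam_pos, float))
    u = f_u * p_cam[0] / p_cam[2] + c_u  # pinhole projection onto U axis
    v = f_v * p_cam[1] / p_cam[2] + c_v  # pinhole projection onto V axis
    return u, v
```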
Referring to figs. 1 and 13, when the obstacle detection result indicates that the obstacle does not exist, the driving assistance image generation system obtains driving assistance images with different views through the second generation flow, which includes the following steps.
In step 304, the processing module 15 determines whether one of the turn signals of the vehicle is turned on according to the set of turn signal parameters of the vehicle information. When the processing module 15 determines that the turn signal is turned on, the process proceeds to step 305; when the processing module 15 determines that none of the turn signals are turned on, the process proceeds to step 307.
In step 305, the processing module 15 calculates a steering azimuth angle corresponding to the turned-on turn signal, according to the turn signal indicated by the turn signal parameter set.
In step 306, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model, together with a preset second depression angle and the steering azimuth angle, into a corresponding two-dimensional first-view or second-view driving assistance image. The operation formula of the perspective projection conversion is the same as formula (3), with θ_x = 0, θ_y being the second depression angle, and θ_z being the steering azimuth angle.
In step 307, the processing module 15 determines whether the lane line detection result indicates at least one lane line. When the processing module 15 determines that the lane line detection result indicates the at least one lane line, the process proceeds to step 308; when the processing module 15 determines that the lane line detection result does not indicate any lane line, the process proceeds to step 314.
In step 308, the processing module 15 determines whether the lane is an intersection according to the at least one lane line. When the processing module 15 determines that the lane is the intersection, the process proceeds to step 309; when the processing module 15 determines that the lane is not the intersection, the flow proceeds to step 311.
In step 309, the processing module 15 calculates a lane view angle (Alley View) corresponding to the intersection, using an existing view angle acquisition method, according to the at least one lane line of the lane.
In step 310, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model, together with the lane view angle, into a corresponding two-dimensional first-view or second-view driving assistance image.
In step 311, the processing module 15 predicts a next driving position of the vehicle according to the vehicle speed of the vehicle information and the at least one lane line.
In step 312, the processing module 15 calculates a third depression angle associated with the virtual camera 701 according to the next driving position and the height of the virtual camera 701 (see figs. 3 and 7), and calculates a first driving azimuth angle of the next driving position relative to the vehicle according to the next driving position.
In step 313, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model, together with the third depression angle and the first driving azimuth angle, into a corresponding two-dimensional first-view or second-view driving assistance image. The operation formula of the perspective projection conversion is the same as formula (3), with θ_x = 0, θ_y being the third depression angle, and θ_z being the first driving azimuth angle.
In step 314, the processing module 15 predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal. It is worth mentioning that the present embodiment uses, for example, a Kalman filter to track and predict the next driving position of the vehicle.
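As a sketch of what step 314 predicts (the patent names a Kalman filter; shown here is only the prediction step of a simple bicycle-style motion model, with illustrative parameter names):

```python
import numpy as np

def predict_next_position(x, y, heading, speed_mps, steering_angle_rad,
                          wheelbase_m=2.7, dt=0.5):
    """One prediction step from vehicle speed and steering angle signal.

    A stand-in for the Kalman-filter tracking mentioned in step 314;
    wheelbase and time step are assumed values.
    """
    yaw_rate = speed_mps * np.tan(steering_angle_rad) / wheelbase_m
    heading_next = heading + yaw_rate * dt
    x_next = x + speed_mps * np.cos(heading_next) * dt
    y_next = y + speed_mps * np.sin(heading_next) * dt
    return x_next, y_next, heading_next
```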
In step 315, the processing module 15 calculates a fourth depression angle associated with the virtual camera 701 according to the next driving position and the height of the virtual camera 701, and calculates a second driving azimuth angle of the next driving position relative to the vehicle according to the next driving position.
In step 316, the processing module 15 uses perspective projection to convert the result of mapping the surrounding images onto the first or second three-dimensional projection model, together with the fourth depression angle and the second driving azimuth angle, into a corresponding first-view or second-view driving assistance image. The operation formula of the perspective projection conversion is the same as formula (3), with θ_x = 0, θ_y being the fourth depression angle, and θ_z being the second driving azimuth angle.
When generating the driving assistance image, the driving assistance image generation procedure of this embodiment determines the viewing angle of the virtual camera 701 according to the driving situation. When an obstacle exists, the corresponding view driving assistance image is converted according to at least the obstacle azimuth angle, reminding the driver to pay attention to the obstacle and avoid a collision; when the driver is turning, the corresponding view driving assistance image is converted according to at least the steering azimuth angle, helping the driver watch the road conditions while turning. In addition, the processing module 15 can predict the next driving position of the vehicle from the driving trajectory or the vehicle body signals and point the observation angle at the next driving position in advance, so that the driver learns the road conditions ahead of time and can respond early. In other embodiments of the present invention, the priority of steps 304 and 307 can be adjusted according to driving requirements, and is not limited thereto.
In summary, in the driving assistance image generation method of the present invention, the processing module 15 selects different three-dimensional projection models according to the position of the obstacle and the vehicle speed, so that the image is rendered more faithfully for each driving situation. The processing module 15 also determines the viewing angle of the virtual camera 701 according to the driving situation, generating a driving assistance image that better matches it, and can further predict the next driving position of the vehicle and point the viewing angle at that position in advance, letting the driver learn the road conditions ahead of time and respond early. A better user experience is thereby provided for the driver, and the object of the present invention is achieved.
The above description is only an example of the present invention and should not limit its scope; simple equivalent changes and modifications made according to the claims and the contents of the specification remain within the scope of the present invention.
Claims (11)
1. A driving assistance image generation method, applicable to a vehicle and implemented by a processing module, the processing module being electrically connected to a photographing unit and an obstacle detection module, the photographing unit being configured to generate and transmit at least one surrounding image related to the surrounding environment of the vehicle to the processing module, the obstacle detection module being configured to generate and transmit an obstacle detection result related to an obstacle corresponding to the vehicle to the processing module, the driving assistance image generation method comprising the following steps:
(A) the processing module determines whether the obstacle detection result indicates that the obstacle exists;
(B) when the processing module determines that the obstacle exists, the obstacle detection result including an obstacle distance between the obstacle and the vehicle, the processing module determines whether the obstacle distance is greater than a distance threshold;
(C) when the processing module determines that the obstacle distance is less than or equal to the distance threshold, the processing module selects a first three-dimensional projection model having an elliptical arc;
(D) when the processing module determines that the obstacle distance is greater than the distance threshold, the processing module selects a second three-dimensional projection model having a circular arc;
(E) the processing module maps the at least one surrounding image onto the selected first or second three-dimensional projection model by inverse perspective projection; and
(F) the processing module converts the result of step (E) into a corresponding first-view or second-view driving assistance image by perspective projection.
2. The driving assistance image generation method according to claim 1, the processing module being further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, wherein in step (B), the vehicle information includes a vehicle speed of the vehicle, and the distance threshold is the product of the vehicle speed and a preset time.
3. The driving assistance image generation method according to claim 1, wherein in step (C), the functional expression of the first three-dimensional projection model is as follows:
4. The driving assistance image generation method according to claim 1, wherein in step (D), the functional expression of the second three-dimensional projection model is as follows:
5. The driving assistance image generation method according to claim 1, wherein step (F) comprises the following substeps:
(F-1) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the obstacle distance in the obstacle detection result and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, wherein the obstacle detection result further includes an obstacle azimuth angle of the obstacle relative to the vehicle; and
(F-2) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the obstacle azimuth angle.
6. The driving assistance image generation method according to claim 1, the processing module being further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, wherein the driving assistance image generation method further comprises the following steps after step (A):
(G) when the processing module determines that the obstacle does not exist, the vehicle information including a vehicle speed of the vehicle, the processing module determines whether the vehicle speed is greater than a speed threshold;
(H) when the processing module determines that the vehicle speed is less than or equal to the speed threshold, the processing module selects the first three-dimensional projection model; and
(I) when the processing module determines that the vehicle speed is greater than the speed threshold, the processing module selects the second three-dimensional projection model.
7. The driving assistance image generation method according to claim 1, the processing module being further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, wherein step (F) comprises the following substeps:
(F-1) the vehicle information includes a set of turn signal parameters indicating whether the turn signals of the vehicle are turned on, and the processing module determines, according to the set of turn signal parameters, whether one of the turn signals of the vehicle is turned on;
(F-2) when the processing module determines that the turn signal is turned on, the processing module calculates a steering azimuth angle corresponding to the turn signal; and
(F-3) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, a preset depression angle, and the steering azimuth angle.
8. The driving assistance image generation method according to claim 1, the processing module being further electrically connected to a lane line detection module, the lane line detection module being configured to generate and transmit a lane line detection result related to the lane in which the vehicle is traveling to the processing module, wherein step (F) comprises the following substeps:
(F-1) the processing module determines whether the lane line detection result indicates at least one lane line;
(F-2) when the processing module determines that the lane line detection result indicates the at least one lane line, the processing module determines whether the lane is an intersection according to the at least one lane line;
(F-3) when the determination result of step (F-2) is affirmative, the processing module calculates a lane view angle corresponding to the intersection according to the at least one lane line; and
(F-4) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model and the lane view angle.
9. The driving assistance image generation method according to claim 8, the processing module being further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, wherein the method further comprises the following substeps after step (F-2):
(F-5) when the determination result of step (F-2) is negative, the vehicle information including a vehicle speed of the vehicle, the processing module predicts a next driving position of the vehicle according to the at least one lane line and the vehicle speed;
(F-6) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the next driving position and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, and calculates a driving azimuth angle of the next driving position relative to the vehicle according to the next driving position; and
(F-7) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the driving azimuth angle.
10. The driving assistance image generation method according to claim 8, the processing module being further electrically connected to a sensing module, the sensing module being configured to sense and transmit vehicle information related to the vehicle to the processing module, wherein the method further comprises the following substeps after step (F-1):
(F-8) when the processing module determines that the lane line detection result does not indicate any lane line, the vehicle information including a vehicle speed of the vehicle and a steering angle signal of the vehicle, the processing module predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal;
(F-9) the processing module calculates a depression angle, associated with the downward shooting of the virtual camera, according to the next driving position and the height of the virtual camera corresponding to the selected first or second three-dimensional projection model, and calculates a driving azimuth angle of the next driving position relative to the vehicle according to the next driving position; and
(F-10) the processing module obtains the corresponding first-view or second-view driving assistance image according to the result of mapping the at least one surrounding image onto the first or second three-dimensional projection model, the depression angle, and the driving azimuth angle.
11. A driving assistance image generation system, characterized in that the driving assistance image generation system is configured to execute the driving assistance image generation method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910619488.9A CN112208438B (en) | 2019-07-10 | 2019-07-10 | Driving auxiliary image generation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112208438A true CN112208438A (en) | 2021-01-12 |
CN112208438B CN112208438B (en) | 2022-07-29 |
Family
ID=74047399
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910619488.9A Active CN112208438B (en) | 2019-07-10 | 2019-07-10 | Driving auxiliary image generation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112208438B (en) |
-
2019
- 2019-07-10 CN CN201910619488.9A patent/CN112208438B/en active Active
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7307655B1 (en) * | 1998-07-31 | 2007-12-11 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for displaying a synthesized image viewed from a virtual point of view |
EP1394761A2 (en) * | 2002-08-28 | 2004-03-03 | Kabushiki Kaisha Toshiba | Obstacle detection device and method therefor |
JP2008230476A (en) * | 2007-03-22 | 2008-10-02 | Denso Corp | Vehicle outside image taking display system and image display control device |
TW201103787A (en) * | 2009-07-31 | 2011-02-01 | Automotive Res & Testing Ct | Obstacle determination system and method utilizing bird's-eye images |
CN102783143A (en) * | 2010-03-10 | 2012-11-14 | 歌乐牌株式会社 | Vehicle surroundings monitoring device |
US20130010118A1 (en) * | 2010-03-26 | 2013-01-10 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
CN102695037A (en) * | 2011-03-25 | 2012-09-26 | 无锡维森智能传感技术有限公司 | Method for switching and expression of vehicle-mounted multi-view camera picture |
CA2767273C (en) * | 2011-05-10 | 2019-05-21 | Harman Becker Automotive Systems Gmbh | Surround view camera automatic calibration with extrinsic parameters only
JP2012239157A (en) * | 2011-05-10 | 2012-12-06 | Harman Becker Automotive Systems Gmbh | Surround view camera automatic calibration with extrinsic parameters only |
EP2554434A1 (en) * | 2011-08-05 | 2013-02-06 | Harman Becker Automotive Systems GmbH | Vehicle surround view system |
KR20130053605A (en) * | 2011-11-15 | 2013-05-24 | 현대자동차주식회사 | Apparatus and method for displaying around view of vehicle |
TW201416274A (en) * | 2012-10-26 | 2014-05-01 | Automotive Res & Testing Ct | Instinct energy-saving driving auxiliary method and instinct energy-saving driving auxiliary system |
CN103885573A (en) * | 2012-12-19 | 2014-06-25 | 财团法人车辆研究测试中心 | Automatic correction method for vehicle display system and system thereof |
US20160221503A1 (en) * | 2013-10-02 | 2016-08-04 | Conti Temic Microelectronic Gmbh | Method and apparatus for displaying the surroundings of a vehicle, and driver assistance system |
TW201601952A (en) * | 2014-07-03 | 2016-01-16 | Univ Shu Te | 3D panoramic image system using distance parameter to calibrate correctness of image |
US20160088260A1 (en) * | 2014-09-18 | 2016-03-24 | Fujitsu Ten Limited | Image processing apparatus |
JP2016085043A (en) * | 2014-10-22 | 2016-05-19 | 株式会社日本自動車部品総合研究所 | Obstacle detection device for vehicle |
JP2016189576A (en) * | 2015-03-30 | 2016-11-04 | アイシン精機株式会社 | Image display control device |
CN205039930U (en) * | 2015-10-08 | 2016-02-17 | 华创车电技术中心股份有限公司 | Three-dimensional driving image reminding device
CN106101635A (en) * | 2016-05-05 | 2016-11-09 | 威盛电子股份有限公司 | Vehicle surrounding image processing method and device |
CN107054223A (en) * | 2016-12-28 | 2017-08-18 | 重庆长安汽车股份有限公司 | Driving blind area display and warning system and method based on panoramic display
CN108765496A (en) * | 2018-05-24 | 2018-11-06 | 河海大学常州校区 | Multi-view automobile surround-view driver assistance system and method
US20190141310A1 (en) * | 2018-12-28 | 2019-05-09 | Intel Corporation | Real-time, three-dimensional vehicle display |
Non-Patent Citations (1)
Title |
---|
曹文君 (CAO Wenjun): "Research on obstacle avoidance and distance measurement methods for panoramic vision environments", China Excellent Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology series *
Similar Documents
Publication | Title | Publication Date |
---|---|---|
JP6675448B2 (en) | Vehicle position detecting method and device | |
CN108496178B (en) | System and method for estimating future path | |
JP6819681B2 (en) | Imaging control devices and methods, and vehicles | |
JP6819680B2 (en) | Imaging control devices and methods, and vehicles | |
US9858639B2 (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
CN108638999B (en) | Anti-collision early warning system and method based on 360-degree look-around input | |
KR100414708B1 (en) | Picture composing apparatus and method | |
WO2019192359A1 (en) | Vehicle panoramic video display system and method, and vehicle controller | |
EP2071491B1 (en) | Stereo camera device | |
JP6257989B2 (en) | Driving assistance device | |
CN103600707B (en) | Parking position detection device and method for an intelligent parking system | |
CN104442567B (en) | Object Highlighting And Sensing In Vehicle Image Display Systems | |
JP4425495B2 (en) | Outside monitoring device | |
JP5455124B2 (en) | Camera posture parameter estimation device | |
WO2019192145A1 (en) | Method and apparatus for adjusting field of view of panoramic image, storage medium, and electronic device | |
CN104802710B (en) | Intelligent automobile reversing aid system and assist method | |
US20110169957A1 (en) | Vehicle Image Processing Method | |
JP6014433B2 (en) | Image processing apparatus, image processing method, and image processing system | |
JP2009060499A (en) | Driving support system, and combination vehicle | |
CN108944668B (en) | Auxiliary driving early warning method based on vehicle-mounted 360-degree look-around input | |
JP2000128031A (en) | Drive recorder, safety drive support system, and anti-theft system | |
CN103204104B (en) | Full-viewing-angle driving monitoring system and method for a vehicle | |
CN107229906A (en) | Automobile overtaking early-warning method based on a variance model algorithm | |
JP2004240480A (en) | Operation support device | |
JP4848644B2 (en) | Obstacle recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||