CN112208438B - Driving auxiliary image generation method and system - Google Patents

Driving auxiliary image generation method and system

Info

Publication number
CN112208438B
CN112208438B (granted publication of application CN201910619488.9A)
Authority
CN
China
Prior art keywords
processing module
vehicle
dimensional projection
projection model
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910619488.9A
Other languages
Chinese (zh)
Other versions
CN112208438A (en)
Inventor
陈育菘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiwan Zhonghua Automobile Industry Co., Ltd.
Original Assignee
Taiwan Zhonghua Automobile Industry Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiwan Zhonghua Automobile Industry Co., Ltd.
Priority to CN201910619488.9A
Publication of CN112208438A
Application granted
Publication of CN112208438B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/301: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8073: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for vehicle security, e.g. parked vehicle surveillance, burglar detection
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146: Display means

Abstract

A driving auxiliary image generation method comprises the following steps: (A) determining whether an obstacle detection result indicates that an obstacle exists; (B) determining whether the obstacle distance in the obstacle detection result is greater than a distance threshold; (C) selecting a first three-dimensional projection model when the obstacle distance is determined to be less than or equal to the distance threshold; (D) selecting a second three-dimensional projection model when the obstacle distance is determined to be greater than the distance threshold; (E) mapping the surrounding image onto the selected first three-dimensional projection model or second three-dimensional projection model; and (F) converting the result of step (E) into a corresponding driving auxiliary image with a first view or a second view.

Description

Driving auxiliary image generation method and system
Technical Field
The present invention relates to an image generating method, and more particularly, to a driving assistance image generating method for a vehicle.
Background
An automobile surround-view (around-view) display system is one of the Advanced Driver Assistance System (ADAS) technologies. It displays, in real time, a 360-degree bird's-eye panoramic image of the vehicle and its surroundings so that the driver can park or perform other low-speed maneuvers safely. In a conventional automobile surround-view display system, images of the vehicle's surrounding environment are captured by wide-angle cameras respectively mounted at the front, the rear, and the two sides of the vehicle; these images are mapped onto a fixed three-dimensional projection model by an inverse perspective projection conversion, and a virtual camera then applies a perspective projection conversion to the three-dimensional projection model at different viewing angles to obtain two-dimensional images from different observation viewpoints.
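To make the two-stage pipeline concrete, the following minimal Python sketch back-projects one pixel of a calibrated camera onto the projection model (reduced to the flat ground plane z = 0 purely for brevity) and re-projects the resulting world point through a virtual camera. The intrinsics, mounting height, and viewing angles are illustrative assumptions, not values from this document.

```python
import numpy as np

def backproject_to_ground(u, v, K, R, C):
    """Stage 1 (inverse perspective projection): intersect the viewing ray
    of pixel (u, v) with the model surface (here the ground plane z = 0).
    K: 3x3 intrinsics; R: world-to-camera rotation; C: camera center in
    world coordinates (z up)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel ray, camera frame
    ray_world = R.T @ ray_cam                           # same ray, world frame
    s = -C[2] / ray_world[2]                            # solve C + s*ray on z = 0
    return C + s * ray_world

def project_virtual(p_world, K_v, R_v, C_v):
    """Stage 2: pinhole projection of a textured world point into the
    virtual camera's output view."""
    p_cam = R_v @ (p_world - C_v)
    uvw = K_v @ p_cam
    return uvw[:2] / uvw[2]

# Illustrative front camera: 1.2 m high, pitched down 30 degrees (y forward).
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
a = np.deg2rad(30)
R = np.array([[1.0, 0.0, 0.0],                  # rows = camera axes in world
              [0.0, -np.sin(a), -np.cos(a)],    # image-down axis
              [0.0,  np.cos(a), -np.sin(a)]])   # viewing axis (forward, down)
C = np.array([0.0, 0.0, 1.2])

# Illustrative virtual camera: 8 m up, looking straight down (bird's eye).
R_v = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
C_v = np.array([0.0, 0.0, 8.0])

ground_pt = backproject_to_ground(320, 240, K, R, C)   # world point on z = 0
print(project_virtual(ground_pt, K, R_v, C_v))         # its pixel in the output view
```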
However, the shape and size of the three-dimensional projection model used in a conventional automobile surround-view display system are fixed and cannot be dynamically adjusted as the driving situation changes, so the displayed image cannot be tailored to the situation and the use experience cannot be improved.
Disclosure of Invention
The invention aims to provide a driving auxiliary image generation method and a driving auxiliary image generation system that dynamically adjust the function and parameters of the three-dimensional projection model according to changes in the driving situation, so as to provide a better use experience.
The driving auxiliary image generating method is suitable for a vehicle and is implemented by a processing module, the processing module is electrically connected with a shooting unit and an obstacle detecting module, the shooting unit is used for generating and transmitting at least one peripheral image related to the peripheral environment of the vehicle to the processing module, the obstacle detecting module is used for generating and transmitting an obstacle detecting result related to an obstacle corresponding to the vehicle to the processing module, and the driving auxiliary image generating method comprises the following steps:
(A) the processing module judges whether the obstacle detection result indicates that the obstacle exists or not;
(B) when the processing module determines that the obstacle exists, wherein the obstacle detection result comprises an obstacle distance between the obstacle and the vehicle, the processing module determines whether the obstacle distance is greater than a distance threshold value;
(C) when the processing module determines that the obstacle distance is less than or equal to the distance threshold, the processing module selects a first three-dimensional projection model having an elliptical arc;
(D) when the processing module determines that the obstacle distance is greater than the distance threshold, the processing module selects a second three-dimensional projection model having a circular arc;
(E) the processing module maps the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model by using an inverse perspective projection conversion method according to the selected first three-dimensional projection model or the selected second three-dimensional projection model; and
(F) the processing module converts the result of step (E) into a corresponding first-view driving auxiliary image or second-view driving auxiliary image by using a perspective projection conversion method.
In the driving assistance image generation method of the present invention, the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module; in the step (B), the vehicle information includes a vehicle speed of the vehicle, and the distance threshold is the product of the vehicle speed and a preset time.
In the driving assistance image generation method of the present invention, in the step (C), the functional formula of the first three-dimensional projection model is as follows:
[formula (1), published as an image in the original document]
wherein V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the first three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
In the driving assistance image generation method of the present invention, in the step (D), the functional formula of the second three-dimensional projection model is as follows:
[formula (2), published as an image in the original document]
wherein V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the second three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
The step (F) of the driving assistance image generation method of the present invention includes the following substeps:
(F-1) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the obstacle distance in the obstacle detection result and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the selected second three-dimensional projection model, wherein the obstacle detection result further includes an obstacle azimuth angle of the obstacle relative to the vehicle; and
(F-2) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the obstacle azimuth angle.
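As an illustration of substep (F-1), the sketch below derives a pitch-down angle from the virtual-camera height and the obstacle distance. The document names these two inputs but does not publish the relation, so the arctangent (camera aimed at the obstacle's ground point) is an assumption.

```python
import math

def overhead_angle_deg(camera_height_m: float, obstacle_distance_m: float) -> float:
    # Pitch-down angle that aims the virtual camera at the obstacle's
    # ground point; the arctangent relation is an assumed geometry, since
    # the patent names the two inputs but publishes no formula.
    return math.degrees(math.atan2(camera_height_m, obstacle_distance_m))

# e.g. a virtual camera 6 m up and an obstacle 4 m away: about 56.3 degrees
print(overhead_angle_deg(6.0, 4.0))
```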
The driving assistance image generation method of the present invention, wherein the processing module is further electrically connected to a sensing module, the sensing module is configured to sense and transmit vehicle information related to the vehicle to the processing module, and the driving assistance image generation method further comprises the following steps after the step (A):
(G) when the processing module determines that the obstacle does not exist, wherein the vehicle information includes a vehicle speed of the vehicle, the processing module determines whether the vehicle speed is greater than a speed threshold;
(H) when the processing module determines that the vehicle speed is less than or equal to the speed threshold value, the processing module selects the first three-dimensional projection model; and
(I) the processing module selects the second three-dimensional projection model when the processing module determines that the vehicle speed is greater than the speed threshold.
In the driving assistance image generating method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module is configured to sense and transmit vehicle information related to the vehicle to the processing module, and the step (F) includes the following substeps:
(F-1) the vehicle information includes a set of turn signal parameters indicating whether a plurality of turn signals of the vehicle are turned on, and the processing module determines whether one of the turn signals of the vehicle is turned on according to the set of turn signal parameters;
(F-2) when the processing module determines that one of the turn signals is turned on, the processing module calculates a steering azimuth corresponding to the turned-on turn signal; and
(F-3) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, a preset overhead angle, and the steering azimuth.
In the driving assistance image generating method of the present invention, the processing module is further electrically connected to a lane line detection module, the lane line detection module is configured to generate and transmit a lane line detection result related to a lane on which the vehicle is driving to the processing module, and the step (F) includes the following substeps:
(F-1) the processing module determining whether the lane line detection result indicates at least one lane line;
(F-2) when the processing module determines that the lane line detection result indicates the at least one lane line, the processing module determines whether the lane is an intersection according to the at least one lane line;
(F-3) when the judgment result of the step (F-2) is affirmative, the processing module calculates a lane viewing angle corresponding to the intersection according to the at least one lane line; and
(F-4) the processing module obtains the corresponding first-view driving auxiliary image or the second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model and the lane viewing angle.
In the driving assistance image generating method of the present invention, the processing module is further electrically connected to a sensing module, the sensing module is configured to sense and transmit vehicle information related to the vehicle to the processing module, and after the step (F-2), the method further includes the following substeps:
(F-5) when the determination result of the step (F-2) is negative, the vehicle information includes a vehicle speed of the vehicle, and the processing module predicts a next driving position of the vehicle according to the at least one lane line and the vehicle speed;
(F-6) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the next driving position and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the selected second three-dimensional projection model, and calculates a driving azimuth of the next driving position relative to the vehicle according to the next driving position; and
(F-7) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the driving azimuth.
The invention relates to a driving auxiliary image generating method, wherein the processing module is also electrically connected with a sensing module, the sensing module is used for sensing and transmitting vehicle information related to the vehicle to the processing module, and the driving auxiliary image generating method is characterized in that: after the step (F-1), the following substeps are included:
(F-8) when the processing module determines that the lane line detection result does not indicate any lane line, the vehicle information includes a vehicle speed of the vehicle and a steering angle signal of the vehicle, and the processing module predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal;
(F-9) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the next driving position and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the second three-dimensional projection model, and calculates a driving azimuth of the next driving position relative to the vehicle according to the next driving position; and
(F-10) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the driving azimuth.
The driving assistance image generation system is used for executing the driving assistance image generation method.
The invention has the beneficial effects that the processing module adopts different three-dimensional projection models according to the obstacle distance between the obstacle and the vehicle or according to the vehicle speed, so that the function and parameters of the three-dimensional projection model are dynamically adjusted with the driving situation and driving images with different views are provided for each situation, giving the driver a better use experience. In addition, the next driving position of the vehicle can be predicted from the vehicle information, and a predictive driving auxiliary image can be provided.
Drawings
Other features and effects of the present invention will become apparent from the following detailed description of the embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating an embodiment of a driving assistance image generation system according to the present invention;
FIG. 2 is a flowchart illustrating an embodiment of a driving assistance image generation method according to the present invention;
FIG. 3 is a perspective view illustrating a first three-dimensional projection model;
FIG. 4 is a schematic front view illustrating a projection of the first three-dimensional projection model onto a first plane;
FIG. 5 is a schematic side view illustrating a projection of the first three-dimensional projection model onto a second plane;
FIG. 6 is a schematic top view illustrating a projection of the first three-dimensional projection model onto a third plane;
FIG. 7 is a perspective view illustrating a second three-dimensional projection model;
FIG. 8 is a schematic front view illustrating the projection of the second three-dimensional projection model onto the first plane;
FIG. 9 is a schematic side view illustrating a projection of the second three-dimensional projection model onto the second plane;
FIG. 10 is a schematic top view illustrating a projection of the second three-dimensional projection model onto the third plane;
FIG. 11 is a schematic diagram illustrating the relationship of different obstacle distances to the first and second three-dimensional projection models;
FIG. 12 is a flowchart illustrating a first process flow of a driving assistance image generation program according to the embodiment of the driving assistance image generation method of the present invention;
FIG. 13 is a flowchart illustrating a second process of generating the driving assistance image generation program according to the embodiment of the driving assistance image generation method of the present invention;
FIG. 14 is an experimental schematic diagram showing a first-view driving assistance image presented after applying the first three-dimensional projection model of the present invention; and
FIG. 15 is an experimental schematic diagram showing a second-view driving assistance image presented after applying the second three-dimensional projection model of the present invention.
Detailed Description
Referring to fig. 1, the embodiment of the driving assistance image generation system of the invention is applicable to a vehicle. The driving auxiliary image generating system comprises a shooting unit 11, an obstacle detecting module 12, a sensing module 13, a lane line detecting module 14 electrically connected with the shooting unit 11, and a processing module 15 electrically connected with the shooting unit 11, the obstacle detecting module 12, the sensing module 13 and the lane line detecting module 14.
The shooting unit 11 includes four camera modules (not shown) respectively disposed on the front bumper, the rear bumper, the left rear-view mirror, and the right rear-view mirror of the vehicle. Each camera module is, for example, a fisheye camera, and photographs the surroundings of the vehicle to generate and transmit four surrounding images related to the surrounding environment of the vehicle to the processing module 15.
The obstacle detection module 12 is configured to generate and transmit an obstacle detection result related to an obstacle corresponding to the vehicle to the processing module 15. The obstacle detection module 12 determines whether an obstacle corresponding to the vehicle exists by using an existing obstacle detection technology. When it determines that the obstacle exists, it detects an obstacle distance between the obstacle and the vehicle and an obstacle azimuth angle of the obstacle relative to the vehicle, and generates the obstacle detection result indicating that the obstacle exists and including the obstacle distance and the obstacle azimuth angle; when it determines that no obstacle exists, it generates the obstacle detection result indicating that the obstacle does not exist. In the present embodiment, the obstacle detection module 12 includes a radar detector (not shown) for detecting the obstacle distance and the obstacle azimuth angle; in other embodiments, however, the obstacle detection module 12 may instead obtain the obstacle distance and the obstacle azimuth angle from the surrounding image of the environment in front of the vehicle captured by the shooting unit 11, and is not limited thereto.
The sensing module 13 is configured to sense and transmit vehicle information related to the vehicle to the processing module 15, where the vehicle information includes a vehicle speed of the vehicle, a set of turn signal parameters indicating whether a plurality of turn signals of the vehicle are turned on, and a steering angle signal of the vehicle. Since the manner in which the sensing module 13 obtains the vehicle information is not a feature of the present invention and is known to those skilled in the art, its details are omitted here for brevity.
The lane line detection module 14 is configured to generate and transmit a lane line detection result related to the lane in which the vehicle is traveling to the processing module 15 according to the surrounding image of the environment in front of the vehicle captured by the shooting unit 11. The lane line detection module 14 determines whether at least one lane line corresponding to the lane exists in that surrounding image by using an existing lane line identification technology. When it determines that the at least one lane line exists, it generates the lane line detection result indicating that the at least one lane line exists and including the identified at least one lane line; otherwise, it generates the lane line detection result indicating that no lane line exists.
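The "existing lane line identification technology" is left unspecified; a common stand-in, sketched below with OpenCV, is an edge-plus-Hough pipeline (the thresholds and region of interest are illustrative assumptions).

```python
import cv2
import numpy as np

def detect_lane_lines(front_image: np.ndarray):
    # Canny edges + probabilistic Hough transform on the lower half of the
    # front-camera image, where the road surface normally appears.
    gray = cv2.cvtColor(front_image, cv2.COLOR_BGR2GRAY)
    half = gray.shape[0] // 2
    edges = cv2.Canny(gray[half:, :], 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return []                       # "no lane line" detection result
    return [(x1, y1 + half, x2, y2 + half) for x1, y1, x2, y2 in lines[:, 0]]
```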
The following describes the operation details of each component in the driving assistance image generation system in conjunction with the embodiment of the driving assistance image generation method of the present invention, which sequentially includes a projection model determination procedure and a driving assistance image generation procedure.
Referring to fig. 1 and 2, the driving assistance image generation system implements the projection model determination procedure and the driving assistance image generation procedure of the driving assistance image generation method of the present invention, which include the following steps. The projection model determination procedure specifies how to select different three-dimensional projection models for different driving situations, and the driving assistance image generation procedure specifies how to obtain driving assistance images with different views.
In step 201, the obstacle detection module 12 continuously generates and transmits the obstacle detection result related to the obstacle corresponding to the vehicle to the processing module 15.
In step 202, the sensing module 13 continuously generates and transmits the vehicle information related to the vehicle to the processing module 15.
In step 203, the lane line detection module 14 continuously generates and transmits the lane line detection result related to the lane in which the vehicle is traveling to the processing module 15.
In step 204, the processing module 15 determines whether the received obstacle detection result indicates that the obstacle exists. When the processing module 15 determines that the obstacle detection result indicates that the obstacle exists, the process proceeds to step 205; when the processing module 15 determines that the obstacle detection result indicates that the obstacle does not exist, the process proceeds to step 208.
In step 205, the processing module 15 determines whether the obstacle distance in the obstacle detection result is greater than a distance threshold. When the processing module 15 determines that the obstacle distance is less than or equal to the distance threshold, the process proceeds to step 206; when the processing module 15 determines that the obstacle distance is greater than the distance threshold, the process proceeds to step 207. It is worth mentioning that the distance threshold is the product of the vehicle speed of the vehicle and a preset time, where the preset time may be the time needed to reach the last braking point (e.g., 2.5 seconds). The processing module 15 first obtains the distance threshold from the vehicle speed sensed by the sensing module 13 and the preset time, and then determines whether the obstacle distance is greater than the distance threshold.
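A minimal sketch of this threshold rule, assuming the vehicle speed is expressed in meters per second:

```python
def distance_threshold_m(vehicle_speed_mps: float, preset_time_s: float = 2.5) -> float:
    # threshold = vehicle speed x preset time (2.5 s per the embodiment);
    # the speed unit (m/s) is an assumption made for this sketch.
    return vehicle_speed_mps * preset_time_s

print(distance_threshold_m(30 / 3.6))   # 30 km/h -> threshold of about 20.8 m
```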
In step 206, the processing module 15 selects a first three-dimensional projection model having an elliptical arc (see fig. 3). The projection of the first three-dimensional projection model onto a first plane composed of the X-axis and the Z-axis is shown in fig. 4, its projection onto a second plane composed of the Y-axis and the Z-axis is shown in fig. 5, and its projection onto a third plane composed of the X-axis and the Y-axis is shown in fig. 6. The curve 501 (see fig. 4) projected onto the first plane by the curved surface of the first three-dimensional projection model and the curve 502 (see fig. 5) projected onto the second plane are both elliptical arcs. The function of the first three-dimensional projection model can be expressed as formula (1):
[formula (1), published as an image in the original document]
where V_l is the length of the vehicle, H is the height of a virtual camera 701 corresponding to the first three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance. In this embodiment, the preset distance d_w is set to 2 meters; however, the value of d_w may be adjusted according to the use requirement and is not limited thereto. The larger the value of d_w, the flatter the elliptical arcs projected by the curved surface of the first three-dimensional projection model onto the first plane and the second plane; the smaller the value of d_w, the closer these arcs are to circular arcs. When d_w is set to 0, the curves projected onto the first plane and the second plane are circular arcs.
In step 207, the processing module 15 selects a second three-dimensional projection model having a circular arc (see fig. 7). The projection of the second three-dimensional projection model onto the first plane is shown in fig. 8, its projection onto the second plane is shown in fig. 9, and its projection onto the third plane is shown in fig. 10. The curve 503 (see fig. 8) projected onto the first plane by the curved surface of the second three-dimensional projection model and the curve 504 (see fig. 9) projected onto the second plane are both circular arcs. The function of the second three-dimensional projection model can be expressed as formula (2):
[formula (2), published as an image in the original document]
where H' is the height of a virtual camera 701 corresponding to the second three-dimensional projection model. In this embodiment, the height of the virtual camera 701 of the first three-dimensional projection model is the same as that of the second three-dimensional projection model, that is, H' = H. Similarly, the value of d_w in the second three-dimensional projection model can be adjusted according to the use requirement and is not limited thereto; the larger the value of d_w, the closer the arcs projected by the curved surface of the second three-dimensional projection model onto the first plane and the second plane are to straight lines. When d_w approaches infinity, the second three-dimensional projection model is substantially cylindrical.
In step 208, the processing module 15 determines whether the vehicle speed in the received vehicle information is greater than a speed threshold. When the processing module 15 determines that the vehicle speed is not greater than the speed threshold, flow proceeds to step 209; when the processing module 15 determines that the vehicle speed is greater than the speed threshold, flow proceeds to step 210.
In step 209, the processing module 15 selects the first three-dimensional projection model (see fig. 3).
In step 210, the processing module 15 selects the second three-dimensional projection model (see fig. 7).
In step 211, the processing module 15 maps the surrounding image to the first three-dimensional projection model or the second three-dimensional projection model by an inverse perspective projection transformation method according to the selected first three-dimensional projection model or the selected second three-dimensional projection model.
When the obstacle distance is less than or equal to the distance threshold, the first three-dimensional projection model is selected because the first-view driving assistance image obtained from it (see fig. 14) has a larger near visual range; when the obstacle distance is greater than the distance threshold, the second three-dimensional projection model is selected because the second-view driving assistance image obtained from it (see fig. 15) magnifies distant objects. When no obstacle exists, the same trade-off applies to the vehicle speed: at low speed the first three-dimensional projection model gives a better visual experience thanks to its larger near visual range, while at high speed the second three-dimensional projection model gives a better visual experience thanks to its magnification of distant objects. As can be seen from fig. 11, 14 and 15, imaging with the first three-dimensional projection model (102 in fig. 11) has a larger near visual range, and imaging with the second three-dimensional projection model (101 in fig. 11) magnifies distant objects, so switching between the two models according to the obstacle distance provides the driver a better visual experience.
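The selection logic of steps 204 through 210 can be summarized as follows; the default 30 km/h speed threshold is an assumed value, since the document does not quantify it.

```python
def select_projection_model(obstacle_detected: bool, obstacle_distance_m: float,
                            vehicle_speed_mps: float,
                            speed_threshold_mps: float = 30 / 3.6,
                            preset_time_s: float = 2.5) -> str:
    if obstacle_detected:                                  # steps 204-207
        threshold = vehicle_speed_mps * preset_time_s      # step 205
        if obstacle_distance_m <= threshold:
            return "first (elliptical arc): larger near visual range"
        return "second (circular arc): magnifies distant objects"
    if vehicle_speed_mps <= speed_threshold_mps:           # steps 208-210
        return "first (elliptical arc): larger near visual range"
    return "second (circular arc): magnifies distant objects"
```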
In step 212, the processing module 15 utilizes a perspective projection transformation method to transform the result of mapping the surrounding image to the first three-dimensional projection model or the second three-dimensional projection model into a corresponding first-view driving assistance image or a second-view driving assistance image.
The details of the driving assistance image generation procedure are described below with reference to the drawings. This procedure projects the selected three-dimensional projection model into driving assistance images with different viewing angles according to the driving situation, and can be further subdivided into a first generation flow and a second generation flow according to whether the obstacle detection result indicates that the obstacle exists.
Referring to fig. 1 and 12, when the obstacle detection result indicates that the obstacle exists, the driving assistance image generation system obtains driving assistance images with different visual fields by using the first generation process, and includes the following steps.
In step 301, the processing module 15 calculates a first overhead angle associated with the downward shooting direction of the virtual camera 701 according to the obstacle distance in the obstacle detection result and the height of the virtual camera 701 (see fig. 3 and 7).
In step 302, the processing module 15 converts the result of mapping the surrounding images onto the first three-dimensional projection model or the second three-dimensional projection model, together with the first overhead angle and the obstacle azimuth angle, into a corresponding two-dimensional first-view driving auxiliary image or second-view driving auxiliary image by using the perspective projection conversion.
It is worth mentioning that the operation formula of the perspective projection transformation is as the following formula (3).
[formula (3), published as an image in the original document]
wherein (x', y', z') is the point of the first three-dimensional projection model or the second three-dimensional projection model to be projected to the first-view driving auxiliary image or the second-view driving auxiliary image, (u, v) is the two-dimensional point after the projection, θ_x = 0, θ_y is the first overhead angle, θ_z is the obstacle azimuth angle, (x_M, y_M, z_M) is the position of the virtual camera 701, f_C and f_D denote the focal lengths of the virtual camera 701 along the U and V axes corresponding to the two-dimensional point (u, v), and c_C and c_D represent the image center point of the first-view driving auxiliary image or the second-view driving auxiliary image.
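The matrices of formula (3) are published only as images, but the listed parameters (per-axis focal lengths, an image center, a camera position, and three rotation angles) match the standard pinhole camera model, so the sketch below assumes that form; the rotation-composition order and sign conventions are likewise assumptions.

```python
import numpy as np

def rotation(theta_x, theta_y, theta_z):
    # R = Rz @ Ry @ Rx; the composition order is an assumption, since the
    # patent publishes the rotation matrices only as images.
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def perspective_project(p, cam_pos, theta_y, theta_z, f_C, f_D, c_C, c_D):
    # Assumed pinhole form of formula (3): theta_x = 0, theta_y is the
    # overhead angle, theta_z the azimuth; (f_C, f_D) and (c_C, c_D) are
    # the focal lengths and image center named in the description.
    K = np.array([[f_C, 0, c_C], [0, f_D, c_D], [0, 0, 1]])
    p_cam = rotation(0.0, theta_y, theta_z) @ (np.asarray(p, float) - np.asarray(cam_pos, float))
    u, v, w = K @ p_cam
    return u / w, v / w

# example invocation (pixel conventions follow the assumed rotation order)
print(perspective_project([2.0, 1.0, 8.0], [0.0, 0.0, 0.0],
                          np.deg2rad(10), np.deg2rad(5), 400.0, 400.0, 320.0, 240.0))
```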
Referring to fig. 1 and 13, when the obstacle detection result indicates that the obstacle does not exist, the driving assistance image generation system obtains driving assistance images with different visual fields by using the second generation process, and includes the following steps.
In step 304, the processing module 15 determines whether one of the turn signals of the vehicle is turned on according to the set of turn signal parameters in the vehicle information. When the processing module 15 determines that one of the turn signals is turned on, the process proceeds to step 305; when none of the turn signals is turned on, the process proceeds to step 307.
In step 305, the processing module 15 calculates a steering azimuth corresponding to the turned-on turn signal according to the turn signal parameter group.
In step 306, the processing module 15 converts the result of mapping the surrounding images onto the first three-dimensional projection model or the second three-dimensional projection model, together with a preset second overhead angle and the steering azimuth, into a corresponding two-dimensional first-view driving auxiliary image or second-view driving auxiliary image by using the perspective projection conversion. The operation formula of the perspective projection conversion is the same as formula (3), with θ_x = 0, θ_y being the second overhead angle, and θ_z being the steering azimuth.
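The mapping from an active turn signal to a steering azimuth is not quantified in the document; the sketch below uses fixed offsets purely as placeholders.

```python
def steering_azimuth_deg(turn_signal_params: dict) -> float:
    # The patent calculates an azimuth "corresponding to" the active turn
    # signal without giving the mapping; the +/-30 degree offsets here are
    # pure placeholders for illustration.
    if turn_signal_params.get("left_on"):
        return -30.0
    if turn_signal_params.get("right_on"):
        return 30.0
    return 0.0

print(steering_azimuth_deg({"left_on": False, "right_on": True}))   # 30.0
```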
In step 307, the processing module 15 determines whether the lane line detection result indicates at least one lane line. When the processing module 15 determines that the lane line detection result indicates the at least one lane line, the process proceeds to step 308; when the lane line detection result does not indicate any lane line, the process proceeds to step 314.
In step 308, the processing module 15 determines whether the lane is an intersection according to the at least one lane line. When the processing module 15 determines that the lane is the intersection, the process proceeds to step 309; when the processing module 15 determines that the lane is not the intersection, the flow proceeds to step 311.
In step 309, the processing module 15 calculates a lane viewing angle corresponding to the intersection by using an existing viewing-angle acquisition method according to the at least one lane line of the lane.
In step 310, the processing module 15 converts the result of mapping the surrounding images onto the first three-dimensional projection model or the second three-dimensional projection model, together with the lane viewing angle, into a corresponding two-dimensional first-view driving auxiliary image or second-view driving auxiliary image by using the perspective projection conversion.
In step 311, the processing module 15 predicts a next driving position of the vehicle according to the vehicle speed of the vehicle information and the at least one lane line.
In step 312, the processing module 15 calculates a third overhead angle associated with the downward shooting direction of the virtual camera 701 according to the next driving position and the height of the virtual camera 701 (see fig. 3 and 7), and calculates a first driving azimuth of the next driving position relative to the vehicle according to the next driving position.
In step 313, the processing module 15 converts the result of mapping the surrounding images onto the first three-dimensional projection model or the second three-dimensional projection model, together with the third overhead angle and the first driving azimuth, into a corresponding two-dimensional first-view or second-view driving auxiliary image by using the perspective projection conversion. The operation formula is the same as formula (3), with θ_x = 0, θ_y being the third overhead angle, and θ_z being the first driving azimuth.
In step 314, the processing module 15 predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal. It is worth mentioning that the present embodiment uses, for example, a Kalman filter to track and predict the next driving position of the vehicle.
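The document names a Kalman filter but no motion model; the sketch below shows only the prediction half, using an assumed kinematic bicycle model driven by the vehicle speed and the steering angle signal.

```python
import numpy as np

def predict_next_position(x, y, heading, speed_mps, steering_angle_rad,
                          wheelbase_m=2.7, dt_s=0.5):
    # Prediction step only: a kinematic bicycle model advances the pose
    # from the vehicle speed and steering angle signal. A full Kalman
    # filter would alternate this with measurement updates of the pose.
    yaw_rate = speed_mps / wheelbase_m * np.tan(steering_angle_rad)
    heading = heading + yaw_rate * dt_s
    x = x + speed_mps * np.cos(heading) * dt_s
    y = y + speed_mps * np.sin(heading) * dt_s
    return x, y, heading

# predicted position half a second ahead, and its driving azimuth
nx, ny, _ = predict_next_position(0.0, 0.0, 0.0, speed_mps=8.3, steering_angle_rad=0.1)
print(np.degrees(np.arctan2(ny, nx)))   # about 8.8 degrees off straight ahead
```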
In step 315, the processing module 15 calculates a fourth overhead angle associated with the downward shooting direction of the virtual camera 701 according to the next driving position and the height of the virtual camera 701, and calculates a second driving azimuth of the next driving position relative to the vehicle according to the next driving position.
In step 316, the processing module 15 converts the result of mapping the surrounding images onto the first three-dimensional projection model or the second three-dimensional projection model, together with the fourth overhead angle and the second driving azimuth, into a corresponding first-view or second-view driving auxiliary image by using the perspective projection conversion. The operation formula is the same as formula (3), with θ_x = 0, θ_y being the fourth overhead angle, and θ_z being the second driving azimuth.
When generating the driving assistance image, the driving assistance image generation procedure of the present embodiment determines the viewing angle of the virtual camera 701 according to the driving situation. When an obstacle exists, the corresponding view of the driving assistance image is derived at least from the obstacle azimuth angle, reminding the driver to pay attention to the obstacle and avoid a collision; when the driver is turning, the corresponding view is derived at least from the steering azimuth, helping the driver watch the road conditions while turning. In addition, the processing module 15 can predict the next driving position of the vehicle from the driving trajectory or the vehicle body signals and set the observation angle to the next driving position in advance, so the driver can know the road conditions ahead of time and respond early. However, in other embodiments of the present invention, the priority of steps 304 and 307 can be adjusted according to driving requirements, and is not limited thereto.
In summary, in the driving assistance image generation method of the present invention, the processing module 15 selects different three-dimensional projection models according to the position of the obstacle and the vehicle speed, so that the image is presented more faithfully for each driving situation. The processing module 15 further determines the viewing angle of the virtual camera 701 according to the driving situation, generating a driving assistance image that better matches it, and can also predict the next driving position of the vehicle and set the viewing angle to that position in advance, letting the driver know the road conditions ahead of time and respond early. The invention thereby provides the driver a better use experience, and the purpose of the present invention is achieved.
The above description is only an example of the present invention, and the scope of the present invention should not be limited thereby, and the invention is still within the scope of the present invention by simple equivalent changes and modifications made according to the claims and the contents of the specification.

Claims (11)

1. A driving auxiliary image generating method is applicable to a vehicle and is implemented by a processing module, the processing module is electrically connected with a shooting unit and an obstacle detecting module, the shooting unit is used for generating and transmitting at least one peripheral image related to the peripheral environment of the vehicle to the processing module, the obstacle detecting module is used for generating and transmitting an obstacle detecting result related to an obstacle corresponding to the vehicle to the processing module, and the driving auxiliary image generating method is characterized by comprising the following steps:
(A) the processing module judges whether the obstacle detection result indicates that the obstacle exists or not;
(B) when the processing module determines that the obstacle exists, wherein the obstacle detection result comprises an obstacle distance between the obstacle and the vehicle, the processing module determines whether the obstacle distance is greater than a distance threshold value;
(C) when the processing module determines that the obstacle distance is less than or equal to the distance threshold, the processing module selects a first three-dimensional projection model having an elliptical arc;
(D) when the processing module determines that the obstacle distance is greater than the distance threshold, the processing module selects a second three-dimensional projection model having a circular arc;
(E) the processing module maps the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model by using an inverse perspective projection conversion method according to the selected first three-dimensional projection model or the selected second three-dimensional projection model; and
(F) the processing module converts the result of step (E) into a corresponding first-view driving auxiliary image or second-view driving auxiliary image by using a perspective projection conversion method.
2. A driving assistance image generation method according to claim 1, wherein the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module, and the processing module comprises: in the step (B), the vehicle information includes a vehicle speed of the vehicle, and the distance threshold is a product of the vehicle speed and a preset time.
3. A driving assistance image generation method according to claim 1, wherein: in the step (C), the functional formula of the first three-dimensional projection model is as follows:
[formula published as an image in the original document]
wherein V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the first three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
4. A driving assistance image generation method according to claim 1, wherein: in the step (D), the functional formula of the second three-dimensional projection model is as follows:
[formula published as an image in the original document]
wherein V_l is the length of the vehicle, H is the height of the virtual camera corresponding to the second three-dimensional projection model, V_w is the width of the vehicle, and d_w is a preset distance.
5. A driving assistance image generation method according to claim 1, wherein: the step (F) comprises the substeps of:
(F-1) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the obstacle distance in the obstacle detection result and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the selected second three-dimensional projection model, wherein the obstacle detection result further includes an obstacle azimuth angle of the obstacle relative to the vehicle; and
(F-2) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the obstacle azimuth angle.
6. A driving assistance image generation method according to claim 1, wherein the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module, and the processing module comprises: the driving auxiliary image generation method further comprises the following steps after the step (A):
(G) when the processing module determines that the obstacle does not exist, wherein the vehicle information includes a vehicle speed of the vehicle, the processing module determines whether the vehicle speed is greater than a speed threshold;
(H) when the processing module determines that the vehicle speed is less than or equal to the speed threshold value, the processing module selects the first three-dimensional projection model; and
(I) the processing module selects the second three-dimensional projection model when the processing module determines that the vehicle speed is greater than the speed threshold.
7. A driving assistance image generation method according to claim 1, wherein the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module, and the processing module comprises: the step (F) comprises the substeps of:
(F-1) the vehicle information includes a set of turn signal parameters indicating whether a plurality of turn signals of the vehicle are turned on, and the processing module determines whether one of the turn signals of the vehicle is turned on according to the set of turn signal parameters;
(F-2) when the processing module determines that one of the turn signals is turned on, the processing module calculates a steering azimuth corresponding to the turned-on turn signal; and
(F-3) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, a preset overhead angle, and the steering azimuth.
8. A driving assistance image generation method according to claim 1, wherein the processing module is further electrically connected to a lane line detection module, the lane line detection module is configured to generate and transmit a lane line detection result related to a lane on which the vehicle is driving to the processing module, and: the step (F) comprises the substeps of:
(F-1) the processing module determining whether the lane line detection result indicates at least one lane line;
(F-2) when the processing module determines that the lane line detection result indicates the at least one lane line, the processing module determines whether the lane is an intersection according to the at least one lane line;
(F-3) when the judgment result of the step (F-2) is affirmative, the processing module calculates a lane viewing angle corresponding to the intersection according to the at least one lane line; and
(F-4) the processing module obtains the corresponding first-view driving auxiliary image or the second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model and the lane viewing angle.
9. A driving assistance image generation method according to claim 8, wherein the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module, and the processing module comprises: after the step (F-2), the following substeps are included:
(F-5) when the determination result of the step (F-2) is negative, the vehicle information includes a vehicle speed of the vehicle, and the processing module predicts a next driving position of the vehicle based on the at least one lane line and the vehicle speed;
(F-6) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the next driving position and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the selected second three-dimensional projection model, and calculates a driving azimuth of the next driving position relative to the vehicle according to the next driving position; and
(F-7) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the driving azimuth.
10. A driving assistance image generation method according to claim 8, wherein the processing module is further electrically connected to a sensing module for sensing and transmitting vehicle information related to the vehicle to the processing module, and the processing module comprises: after the step (F-1), the following substeps are also included:
(F-8) when the processing module determines that the lane line detection result does not indicate any lane line, the vehicle information includes a vehicle speed of the vehicle and a steering angle signal of the vehicle, and the processing module predicts a next driving position of the vehicle according to the vehicle speed and the steering angle signal;
(F-9) the processing module calculates an overhead angle associated with the downward shooting direction of the virtual camera according to the next driving position and the height of the virtual camera corresponding to the selected first three-dimensional projection model or the second three-dimensional projection model, and calculates a driving azimuth of the next driving position relative to the vehicle according to the next driving position; and
(F-10) the processing module obtains the corresponding first-view driving auxiliary image or second-view driving auxiliary image according to the result of mapping the at least one peripheral image to the first three-dimensional projection model or the second three-dimensional projection model, the overhead angle, and the driving azimuth.
11. A driving auxiliary image generation system is characterized in that: the driving assistance image generation system is used for executing the driving assistance image generation method according to any one of claims 1 to 10.
CN201910619488.9A 2019-07-10 2019-07-10 Driving auxiliary image generation method and system Active CN112208438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910619488.9A 2019-07-10 2019-07-10 Driving auxiliary image generation method and system

Publications (2)

Publication Number Publication Date
CN112208438A (en) 2021-01-12
CN112208438B (en) 2022-07-29

Family

ID=74047399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910619488.9A Active CN112208438B (en) 2019-07-10 2019-07-10 Driving auxiliary image generation method and system

Country Status (1)

Country Link
CN (1) CN112208438B (en)

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2267656A3 (en) * 1998-07-31 2012-09-26 Panasonic Corporation Image displaying apparatus and image displaying method
JP3895238B2 (en) * 2002-08-28 2007-03-22 株式会社東芝 Obstacle detection apparatus and method
JP4404103B2 (en) * 2007-03-22 2010-01-27 株式会社デンソー Vehicle external photographing display system and image display control device
TW201103787A (en) * 2009-07-31 2011-02-01 Automotive Res & Testing Ct Obstacle determination system and method utilizing bird's-eye images
JP5479956B2 (en) * 2010-03-10 2014-04-23 クラリオン株式会社 Ambient monitoring device for vehicles
JP2011205513A (en) * 2010-03-26 2011-10-13 Aisin Seiki Co Ltd Vehicle periphery monitoring device
CN102695037B (en) * 2011-03-25 2018-01-02 无锡维森智能传感技术有限公司 A kind of switching of vehicle-mounted multi-view camera picture and method for expressing
EP2523163B1 (en) * 2011-05-10 2019-10-16 Harman Becker Automotive Systems GmbH Method and program for calibrating a multicamera system
EP2554434B1 (en) * 2011-08-05 2014-05-21 Harman Becker Automotive Systems GmbH Vehicle surround view system
KR20130053605A (en) * 2011-11-15 2013-05-24 현대자동차주식회사 Apparatus and method for displaying around view of vehicle
TWI505958B * 2012-10-26 2015-11-01 Intuitive energy-saving driving aids and systems
CN103885573B (en) * 2012-12-19 2017-03-01 财团法人车辆研究测试中心 The auto-correction method of automobile-used display system and its system
DE102013220005A1 (en) * 2013-10-02 2015-04-02 Continental Automotive Gmbh Method and device for displaying the environment of a vehicle and driver assistance system
TW201601952A (en) * 2014-07-03 2016-01-16 Univ Shu Te 3D panoramic image system using distance parameter to calibrate correctness of image
JP2016063390A (en) * 2014-09-18 2016-04-25 富士通テン株式会社 Image processing system and image display system
JP6528382B2 (en) * 2014-10-22 2019-06-12 株式会社Soken Vehicle Obstacle Detection Device
JP6554866B2 (en) * 2015-03-30 2019-08-07 アイシン精機株式会社 Image display control device
CN205039930U (en) * 2015-10-08 2016-02-17 华创车电技术中心股份有限公司 Three -dimensional driving image reminding device
TWI613106B (en) * 2016-05-05 2018-02-01 威盛電子股份有限公司 Method and apparatus for processing surrounding images of vehicle
CN107054223A (en) * 2016-12-28 2017-08-18 重庆长安汽车股份有限公司 It is a kind of to be shown based on the driving blind area that panorama is shown and caution system and method
CN108765496A (en) * 2018-05-24 2018-11-06 河海大学常州校区 A kind of multiple views automobile looks around DAS (Driver Assistant System) and method
US20190141310A1 (en) * 2018-12-28 2019-05-09 Intel Corporation Real-time, three-dimensional vehicle display

Also Published As

Publication number Publication date
CN112208438A (en) 2021-01-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant