CN115303181A - Method and device for assisting a driver in monitoring the environment outside a vehicle - Google Patents

Method and device for assisting a driver in monitoring the environment outside a vehicle

Info

Publication number
CN115303181A
CN115303181A (Application CN202211106067.4A)
Authority
CN
China
Prior art keywords
vehicle
driver
angle
viewing angle
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211106067.4A
Other languages
Chinese (zh)
Inventor
K·普特舍尔
A·柯卡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mercedes Benz Group AG
Original Assignee
Mercedes Benz Group AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mercedes Benz Group AG filed Critical Mercedes Benz Group AG
Priority to CN202211106067.4A
Publication of CN115303181A
Legal status: Pending (current)

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to the field of driving assistance technology. The invention provides a method for assisting a driver of a vehicle in monitoring the environment outside the vehicle, comprising the steps of: S1: capturing an image of the environment outside the vehicle with a camera arranged on the vehicle; S2: acquiring the pitch state of the vehicle during travel; S3: acquiring the posture of the driver of the vehicle; S4: outputting the captured image of the environment outside the vehicle on a display unit of the vehicle while switching between different display modes, in which the captured image is output at least at different viewing angles of the camera, based on the pitch state of the vehicle during travel and the posture of the driver. The invention also relates to a device for assisting a driver of a vehicle in monitoring the environment outside the vehicle and to a machine-readable storage medium. In the invention, the output viewing angle of the camera is modified by software, so that the observation field of view can be customized for different road conditions and driver postures, which greatly improves the user experience.

Description

Method and device for assisting a driver in monitoring the environment outside a vehicle
Technical Field
The invention relates to a method for assisting a driver in monitoring an environment external to a vehicle, to a device for assisting a driver in monitoring an environment external to a vehicle, and to a machine-readable storage medium.
Background
With the rise of parking structures and underground garages in cities, vehicles often need to pass through a stretch of continuous ramp when entering and exiting parking areas. However, when a vehicle is about to reach the top of a slope or suddenly enters a descending passage from a level road surface, the driver's view is limited by the obstruction of the engine hood. In addition, when the vehicle travels on off-road terrain or steep slopes, a blind zone likewise forms in a specific area in front of the vehicle, which not only creates a safety hazard but also degrades the driving experience.
For this reason, a blind-zone obstacle-avoidance function for sloped road sections has been proposed in the prior art, in which a suitable blind-zone image is provided by controlling a translational/rotational movement of the on-board camera when the vehicle enters a sloped road section. However, this solution still has many disadvantages: in particular, implementing the mechanical movement of the camera requires a moving track and a drive mechanism to be deployed on the vehicle, which not only means higher hardware overhead but also reduces the stability of the camera, which is easily damaged during the movement.
Against this background, it is desirable to provide a blind-zone assistance scheme implemented in software, which provides the driver with the desired road surface view through a more reliable image processing approach.
Disclosure of Invention
It is an object of the present invention to provide a method for assisting a driver in monitoring an environment external to a vehicle, an apparatus for assisting a driver in monitoring an environment external to a vehicle and a machine-readable storage medium, which solve at least some of the problems of the prior art.
According to a first aspect of the present invention, there is provided a method for assisting a driver of a vehicle in monitoring an environment external to the vehicle, the method comprising the steps of:
S1: capturing an image of the environment outside the vehicle with a camera arranged on the vehicle;
S2: acquiring the pitch state of the vehicle during travel;
S3: acquiring the posture of the driver of the vehicle; and
S4: outputting the captured image of the environment outside the vehicle on the display unit of the vehicle while switching between different display modes, in which the captured image of the environment outside the vehicle is output at least at different viewing angles of the camera, based on the pitch state of the vehicle during travel and the posture of the driver.
The invention is based in particular on the following technical concept: the viewing angle of the camera is modified by software at the image output stage, so that the camera does not need to be moved mechanically to obtain different observation fields of view, and the diverse road surface observation requirements under different driving conditions are met. Overall, the deployment overhead of a camera drive mechanism is reduced and driving safety is improved. In addition, the posture of the driver is taken into account when modifying the output viewing angle of the camera, so that the observation field of view can be customized more individually for different users or different road conditions, which greatly improves the user experience.
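As a purely illustrative sketch (not part of the patent disclosure), the decision flow of steps S1-S4 could be wired together as follows; all interface names, thresholds and the simple mode rule are assumptions made for the example.

```python
# Hypothetical sketch of the S1-S4 flow; names and thresholds are assumptions,
# not taken from the patent.

def select_display_mode(pitch_deg, eye_height_m,
                        pitch_threshold_deg=15.0, ref_eye_height_m=1.20):
    """Pick a display mode and vertical viewing angle from the vehicle pitch
    state and the driver's posture (step S4)."""
    if abs(pitch_deg) < pitch_threshold_deg:
        mode, vertical_fov_deg = "M1_normal", 60.0   # narrow, corrected view
    else:
        mode, vertical_fov_deg = "M2_wide", 100.0    # expanded toward the ground
    # A lower eye position means a larger blind zone, so widen the view slightly.
    vertical_fov_deg += max(0.0, ref_eye_height_m - eye_height_m) * 20.0
    return mode, vertical_fov_deg


def assist_step(camera, pitch_sensor, posture_sensor, display):
    frame = camera.get_frame()                        # S1: capture image
    pitch_deg = pitch_sensor.get_pitch_deg()          # S2: pitch state
    eye_height_m = posture_sensor.get_eye_height_m()  # S3: driver posture
    mode, fov = select_display_mode(pitch_deg, eye_height_m)
    display.show(frame, mode=mode, vertical_fov_deg=fov)  # S4: output image
```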
Optionally, in the different display modes, the captured images of the environment outside the vehicle are output at least at different vertical viewing angles of the camera. It has been recognized that, in different scenes, blind zones may form at different positions along the vehicle height direction due to occlusion by the vehicle body structure; by varying the viewing angle primarily in this direction, the image processing overhead can be minimized while the compensating view still fully covers the blind zone.
Optionally, the degree to which the viewing angle corresponding to the display mode to be switched to is directed toward the ground is determined in positive correlation with the pitch angle of the vehicle. The magnitude of the pitch angle reflects, to a certain extent, how much of the driver's field of view is occluded relative to the horizontal ground; through this dynamic adaptation, the driver can perceive obstacle information close to the road surface more accurately.
Optionally, the extent of the viewing angle corresponding to the display mode to be switched to, in the direction perpendicular to the optical axis of the camera, is determined as a function of the uprightness of the driver's sitting posture, the height of the driver and/or the eye height of the driver. In this way, a view that fully covers the blind zone can be provided regardless of the driver's posture, further improving driving safety.
Optionally, the timing of the switching between the different display modes is controlled based on the posture of the driver, wherein the switching between the different display modes is triggered earlier as the uprightness of the driver's sitting posture, the height of the driver and/or the eye position of the driver decreases. This ensures that the driver receives the image of the blind zone in good time in different postures.
Optionally, in the case where a switch into a determined display mode has already been made with a first viewing-angle transformation amplitude based on the pitch state of the vehicle, the viewing angle corresponding to that display mode is dynamically adjusted with a second viewing-angle transformation amplitude based on changes in the driver's posture, wherein the first viewing-angle transformation amplitude is larger than the second. Compared with changes in the road condition, changes in the driver's posture are less predictable; the intended display mode can therefore be determined roughly from the road state, and the preset viewing angle can then be corrected or adjusted according to the continuously changing posture. This saves time in selecting the viewing angle during mode switching while ensuring dynamic adaptation of the displayed image to the driver's posture.
Optionally, the camera is fixedly arranged on the vehicle and has a preset viewing angle, wherein in at least one display mode an image of the environment outside the vehicle is output at the preset viewing angle, and in at least one other display mode a partial viewing angle is cut out from the preset viewing angle and the image of the environment outside the vehicle is output at this partial viewing angle. This reduces the number of on-board cameras required and the hardware deployment cost.
Optionally, the captured image of the environment outside the vehicle is output at a first viewing angle in a first display mode if the pitch state of the vehicle does not satisfy a preset condition, and at a second viewing angle in a second display mode if the pitch state satisfies the preset condition, the second viewing angle including an additional viewing angle that extends downward relative to the first viewing angle at least in the direction perpendicular to the optical axis of the camera. By strictly defining the trigger condition for mode switching, excessively frequent large viewing-angle changes and visual jitter can be effectively prevented, improving the user experience.
Optionally, the first viewing angle corresponds to a narrow angular range and the second viewing angle corresponds to a wide angular range that covers the first viewing angle; alternatively, the first viewing angle corresponds to a narrow angular range and the second viewing angle corresponds to another narrow angular range, with the first and second viewing angles not overlapping or only partially overlapping. Suitable image processing thus provides a diversified display scheme and further reduces the hardware cost and parameter requirements of the on-board camera.
Alternatively, in the first display mode the first viewing angle is adjusted, based on the pitch angle of the vehicle and/or the posture of the driver, within a first angle interval referenced to the first viewing angle, and in the second display mode the second viewing angle is adjusted, based on the pitch angle of the vehicle and/or the posture of the driver, within a second angle interval referenced to the second viewing angle. Limiting the adjustment range and reference through the determined display mode ensures that the road image adapts dynamically to the driving condition and the driver's posture without producing overly frequent or abrupt viewing-angle changes, increasing user comfort while maintaining safety.
Optionally, satisfying the preset condition includes:
the vehicle transitions from a first road section to a second road section, the pitch angle of the vehicle when travelling on the first road section being larger than when travelling on the second road section;
the pitch angle of the vehicle is greater than a preset threshold value; and/or
the vehicle travels in an off-road section that causes the pitch angle to change continuously. By predefining the mode-switching conditions, blind-zone assistance can be provided to the driver more accurately, and visual interference or discomfort caused by repeated switching of the mode or viewing angle is avoided; an illustrative check of these conditions is sketched below.
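For illustration only, the three alternative trigger conditions could be checked as in the following sketch; the numeric thresholds and the off-road heuristic (spread of recent pitch samples) are assumptions, not values from the disclosure.

```python
from statistics import pstdev

def preset_condition_met(pitch_prev_deg, pitch_now_deg, recent_pitch_deg,
                         crest_drop_deg=5.0, pitch_threshold_deg=15.0,
                         offroad_spread_deg=3.0):
    """True if any of the three illustrative mode-switching conditions holds."""
    # 1) Transition from a steeper to a flatter section (e.g. cresting a ramp).
    crest_transition = (pitch_prev_deg - pitch_now_deg) > crest_drop_deg
    # 2) Pitch angle above a preset threshold (steep slope).
    steep_slope = pitch_now_deg > pitch_threshold_deg
    # 3) Continuously changing pitch angle, approximated by the spread of
    #    recently sampled pitch values (off-road driving).
    off_road = (len(recent_pitch_deg) > 1
                and pstdev(recent_pitch_deg) > offroad_spread_deg)
    return crest_transition or steep_slope or off_road
```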
Optionally, in the case where a predefined change occurs in the sight-line direction of the driver of the vehicle, the current viewing angle used for outputting the image of the environment outside the vehicle is extended in the direction of that change. In this way, the road image currently required can be expanded precisely according to the driver's needs, and the driver experiences the dynamic effect of the displayed image following his or her line of sight, bringing the electronic display closer to the real observation experience.
Optionally, the posture of the driver is obtained by:
estimating the sitting posture, height and/or eye position of the driver from the pose of the driver's seat;
detecting the eye position and/or sight-line direction of the driver by means of an optical sensor arranged in the vehicle cabin; and/or
reading a pre-stored posture for the driver by recognizing the driver's identity information. Providing multiple ways of acquiring the driver's posture helps to broaden the application scenarios of the blind-zone assistance scheme and allows it to be flexibly adapted to different vehicle configurations.
Optionally, the method further comprises the step of: acquiring travel speed information of the vehicle, and additionally controlling the switching between the different display modes based on the travel speed information. In some scenes, a reduction in the travel speed of the vehicle can indirectly indicate the presence of a blind zone or a hazardous road condition. By additionally taking the vehicle speed into account, the timing and conditions of mode switching can be determined more accurately.
Optionally, the method further comprises the step of: acquiring steering angle information of the vehicle, and additionally controlling the switching between the different display modes based on the steering angle information.
Alternatively, in different display modes, the captured images of the environment outside the vehicle are output at different horizontal viewing angles of the camera. Owing to the limitations of the vehicle body structure and the presence of the A-pillars, the driver's lateral view is restricted in many situations; by additionally taking the steering angle information into account, the driver can be assisted in obtaining a larger field of view in the horizontal direction, further improving driving safety.
Alternatively, in different display modes, the captured image of the environment outside the vehicle is output with different degrees of distortion correction. If the on-board camera uses a wide-angle lens, cutting out only a partial viewing angle can visually aggravate the distortion effect; providing linear correction in specific modes therefore improves the driver's visual comfort.
According to a second aspect of the present invention, there is provided an apparatus for assisting a driver of a vehicle in monitoring an environment external to the vehicle, the apparatus being for performing the method according to the first aspect of the present invention, the apparatus comprising:
an image acquisition module configured to receive an image of an environment outside a vehicle captured by means of a camera arranged on the vehicle;
a first acquisition module configured to be able to acquire a pitch state of the vehicle during traveling;
a second acquisition module configured to be able to acquire a posture of a driver of the vehicle; and
a control module configured to be able to output the captured image of the environment outside the vehicle on a display unit of the vehicle, switching between different display modes in which the captured image of the environment outside the vehicle is output at least at different angles of view of the camera, based on a pitch state of the vehicle during traveling and a posture of the driver.
According to a third aspect of the present invention, there is provided a machine-readable storage medium having stored thereon a computer program for performing the method according to the first aspect of the present invention when run on a computer.
Drawings
The principles, features and advantages of the present invention may be better understood by describing the invention in more detail below with reference to the accompanying drawings. The drawings comprise:
FIG. 1 shows a block diagram of an apparatus for assisting a driver in monitoring an environment external to a vehicle, according to an exemplary embodiment of the present invention;
FIG. 2 shows a flow chart of a method for assisting a driver in monitoring an environment external to a vehicle according to an exemplary embodiment of the present invention;
FIG. 3 illustrates, in an exemplary embodiment, a flow chart of one method step of the method shown in FIG. 2;
FIG. 4 shows a flow chart of one method step of the method shown in FIG. 2 in another exemplary embodiment;
FIGS. 5a-5d show schematic diagrams of display modes determined at different pitch states of the vehicle;
FIG. 6 shows a schematic view of the display mode determined at different postures of the driver.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and exemplary embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention.
Fig. 1 shows a block diagram of an arrangement for assisting a driver in monitoring the environment outside a vehicle according to an exemplary embodiment of the present invention.
With reference to fig. 1, the device 1 for assisting a driver in monitoring the environment outside a vehicle comprises an image acquisition module 10, a first acquisition module 21, a second acquisition module 22 and a control module 30, which are communicatively connected to one another.
The image acquisition module 10 is, for example, connected to or comprises at least one camera of the vehicle in order to receive images of the environment outside the vehicle captured by the camera. In one example, the camera is a wide-angle camera fixedly mounted at the radiator grille at the front of the vehicle, so that its preset viewing angle can cover the road surface within a certain range in front of the vehicle. In another example, the image acquisition module 10 may also be connected to one or more side-view cameras located on the side of the vehicle (e.g., at the rear-view mirror housing) so as to be able to receive images of the road environment obliquely ahead of and to the side of the vehicle. The camera of the vehicle may, for example, continuously generate a stream of images at a particular frame rate, thereby providing real-time video of the environment outside the vehicle.
The first acquisition module 21 is used to acquire the pitch state of the vehicle during travel; for this purpose, the first acquisition module 21 may be configured as any of various pitch sensors such as a gyroscope, an accelerometer, a tilt sensor, or any combination thereof. Additionally, the first acquisition module 21 may, for example, further include a distance-measuring sensor mounted on the vehicle body, which measures the distance between a fixed point on the vehicle body and the ground by means of emitted laser beams or radar electromagnetic waves in order to determine changes in the road surface on which the vehicle is travelling and thereby infer the likely trend of the vehicle's pitch state. It is also possible for the first acquisition module 21 to comprise an optical sensor in order to verify the pitch state of the vehicle by means of image recognition techniques. The specific configuration of the first acquisition module 21 and the class of sensors on which it is based can be adjusted flexibly according to the actual cost or accuracy requirements.
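One generic way to fuse a gyroscope and an accelerometer into a pitch estimate is a complementary filter; the sketch below shows that well-known technique as an assumed example only, not the actual implementation of the first acquisition module 21 (axis and sign conventions depend on the sensor mounting).

```python
import math

def pitch_from_accel(ax, ay, az):
    """Static pitch estimate in radians from accelerometer readings,
    assuming an x-forward body frame at rest (convention-dependent)."""
    return math.atan2(-ax, math.hypot(ay, az))

def complementary_pitch(prev_pitch_rad, gyro_pitch_rate_rad_s,
                        ax, ay, az, dt_s, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer estimate (noisy but drift-free)."""
    gyro_pitch = prev_pitch_rad + gyro_pitch_rate_rad_s * dt_s
    accel_pitch = pitch_from_accel(ax, ay, az)
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```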
The second acquisition module 22 is used to acquire the posture of the driver of the vehicle. For this purpose, the second acquisition module 22 is connected, for example, to a seat position sensor of the vehicle in order to infer the sitting posture or height of the driver from the position of the seat in the cabin in the horizontal or vertical direction and from the inclination of the backrest. Furthermore, the second acquisition module 22 may also be connected to a pressure sensor provided in the seat cushion or the backrest, so that the degree to which the driver's sitting posture deviates from an upright posture can be judged more accurately in combination with the pressure information. Knowing the driver's sitting posture or height makes it possible to estimate the driver's eye position and, from this, to calculate the size of the driver's blind zone in different pitch states of the vehicle. In addition, the second acquisition module 22 is connected, for example, to a gaze-tracking device arranged in the vehicle cabin, so that the driver's head posture and pupil position can be detected and tracked and the driver's sight-line direction determined at each instant.
The embodiment shown in fig. 1 also shows an optional third acquisition module 23 and a fourth acquisition module 24, which are configured to acquire the travel speed of the vehicle and the steering angle of the vehicle, respectively. The third acquisition module 23 is, for example, connected to or configured as a wheel speed sensor of the vehicle, so that information about the travel speed of the vehicle can be collected. The fourth acquisition module 24 is connected, for example, to the steering wheel of the vehicle, so that the steering angle of the vehicle can be obtained directly from the rotation angle of the steering wheel.
In the embodiment shown in fig. 1, the control module 30 further includes, for example, a calculation unit 31 and a mode database 32. The calculation unit 31 calculates the position and size of the driver's blind zone using the vehicle pitch-state data, driver posture data, vehicle travel-speed data and vehicle steering-angle data from the respective acquisition modules 21, 22, 23, 24 shown in fig. 1, and from this derives the desired observation angle under different conditions. In the mode database 32, a plurality of display modes are stored in association with different viewing angles or viewing-angle ranges, so that the calculation unit 31 can select a corresponding display mode from the mode database 32 according to the desired viewing angle and modify the raw output of the on-board camera accordingly. In another example, the control module 30 may also include a trained machine learning model that determines an appropriate viewing-angle range for the driver based on predetermined inputs received from the respective acquisition modules 21, 22, 23, 24 and outputs a corresponding display mode. In the embodiment shown in fig. 1, the device 1 optionally also comprises a manual switch 25, by means of which the driver can choose autonomously whether the view assistance function is activated or whether a particular display mode is switched into.
In order to present the captured image of the environment outside the vehicle to the driver at a suitable viewing angle, the control module 30 is also connected to a display unit 40 of the vehicle, where the display unit 40 includes, for example but not limited to, a head-up display (HUD), an infotainment system display (HU: Head Unit), an instrument cluster (IC: Instrument Cluster) and a centralized in-vehicle display (CIVIC: Centralized In-Vehicle Integration Computer) of the vehicle. The driver can thus learn about obstacles in the blind zone from the images output in the different display modes on the display unit 40 and take corresponding measures to control the vehicle.
Fig. 2 shows a flowchart of a method for assisting a driver in monitoring an environment outside a vehicle according to an exemplary embodiment of the invention. The method exemplarily comprises the steps S1-S4 and may for example be implemented using the device 1 shown in fig. 1.
In step S1, an image of the environment outside the vehicle is captured by means of a camera arranged on the vehicle. The vehicle may comprise at least one camera, for example having a preset viewing angle range and being fixedly mounted under a front bumper of the vehicle (e.g. at a radiator grill), on the roof, near the lights or elsewhere in the vehicle. The environment outside the vehicle is in particular the traffic environment in front of the vehicle, but in some application scenarios this also includes the traffic environment on the sides of the vehicle as well as the traffic environment behind the vehicle.
In step S2, the pitch state of the vehicle during running is acquired. As an example, the pitch angle of the vehicle may be measured directly by means of at least one type of pitch sensor of the vehicle. As another example, the pitch state of the vehicle may also be indirectly inferred by detecting the road surface condition via a ranging sensor or image recognition techniques. It is apparent here that when the vehicle is running on a level road, the pitch angle thereof is, for example, approximately 0 °, and as the gradient of the road on which the vehicle is running increases, the pitch angle thereof also increases in the positive direction, and when the vehicle is in a downhill running state, the pitch angle thereof has a negative value.
In step S3, the posture of the driver of the vehicle is acquired. On the one hand, the posture of the driver includes the sitting posture, gestures, movements, head posture, viewpoint position and sight-line direction of the driver; on the other hand, it also includes the height, eye position and head position of the driver. With this information, the respective blind zones can be determined for different drivers, and changes in the blind zone can be calculated adaptively as the posture of the same driver changes. For example, the posture of the driver can be acquired by:
-estimating the driver's sitting posture, height and/or eye position from the driver's seat pose;
-detecting the eye position and/or the direction of sight of the driver by means of an optical sensor arranged in the cabin; and/or
- reading a pre-stored posture for the driver by recognizing the driver's identity information.
In step S4, the captured image of the environment outside the vehicle is output on the display unit of the vehicle while switching between different display modes in which the captured image of the environment outside the vehicle is output at least at different angles of view of the camera, based on the pitch state of the vehicle during traveling and the posture of the driver.
To implement the viewing-angle transformation by software, the native camera output can be modified (e.g., masked or cropped) by image cropping or pan-and-rotate techniques, such that the image presented to the driver is only a portion of the camera's original output. In the different display modes, for example, the captured images of the environment outside the vehicle are output at least at different vertical viewing angles of the camera. This means that, if the camera of the vehicle is configured as a wide-angle camera with a vertical angle of view of 180 degrees, in at least one display mode an image of the surroundings can be output directly with the full 180-degree vertical angle of view, while in at least one other display mode a partial angle of view of, for example, 90 degrees can be cut out from the 180-degree vertical angle of view and the image of the surroundings output with this partial angle of view.
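Assuming, purely for illustration, an equidistant vertical projection (angle proportional to image row), cutting such a partial vertical viewing angle out of the full camera frame reduces to selecting a band of rows; a real system would use the calibrated lens model instead.

```python
import numpy as np

def crop_vertical_fov(frame, full_vfov_deg, crop_vfov_deg, center_offset_deg=0.0):
    """Return the band of image rows covering `crop_vfov_deg`, centered
    `center_offset_deg` below the optical axis (positive = toward the ground).

    Assumes rows map linearly to vertical angle (equidistant projection)."""
    h = frame.shape[0]
    rows_per_deg = h / full_vfov_deg
    center_row = h / 2.0 + center_offset_deg * rows_per_deg
    half_band = (crop_vfov_deg * rows_per_deg) / 2.0
    top = int(np.clip(center_row - half_band, 0, h))
    bottom = int(np.clip(center_row + half_band, 0, h))
    return frame[top:bottom]

# Example: crop a 90-degree band, pivoted 20 degrees toward the ground,
# out of a 180-degree wide-angle frame:
# view = crop_vertical_fov(frame, 180.0, 90.0, center_offset_deg=20.0)
```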
In addition, in the different display modes, the captured images of the environment outside the vehicle can be output with different horizontal viewing angles, different degrees of distortion correction, different zoom levels, different brightness, different resolutions, different contrasts and different sharpness of the camera, so that the road surface in the blind zone can be presented to the driver better. For example, if the vehicle is travelling on off-road terrain in severe weather, it may be advantageous to increase the display brightness and contrast while changing the display viewing angle. As another example, if a switch is made from the wide-angle mode back to the narrow-angle mode based on the pitch state and the posture of the driver, a linear correction should be applied to the image in the narrow-angle mode by means of an image processing algorithm in order to eliminate distortion.
In this step, in addition to the pitch state and the driver's posture of the vehicle, switching between different display modes may be additionally controlled based on the traveling speed of the vehicle and the steering angle. For example, when the steering wheel reaches a predetermined limit position clockwise or counterclockwise, the visual field range in the current display mode may be automatically expanded to the left or right by an additional visual field range to provide more complete road surface information to the driver in the horizontal direction. As another example, in the case where the speed of the vehicle is below a limit value (e.g., 15 km/h) and the pitch angle of the vehicle is continuously changed, it is possible to switch into the wide-angle mode.
When switching between different display modes is controlled based on the pitch state of the vehicle and the posture of the driver, a variety of mode switching mechanisms can be selected according to the use scene and the use requirement. For example, the desired viewing angle of the driver can be calculated directly from the pitch angle and the driver attitude of the vehicle, and the display mode corresponding thereto can be found. In this case, the image is output in the first display mode when the calculated driver's desired observation angle is within the first angle section, and is automatically switched from the first display mode to the second display mode when the calculated driver's desired observation angle falls within the second angle section out of the first angle section. In addition, the display mode to be switched can be roughly determined according to the pitching state of the vehicle, and then fine adjustment is performed on the visual angle corresponding to the display mode according to the height/sitting posture of the driver.
As an example, the degree to which the viewing angle corresponding to the display mode to be switched to is directed toward the ground may be determined in positive correlation with the pitch angle of the vehicle. In other words, as the vehicle pitch angle increases, the viewing angle is pivoted further downward relative to the camera optical axis, so that it is directed toward the ground to a greater extent.
As another example, the extent of the viewing angle corresponding to the display mode to be switched to, in the direction perpendicular to the optical axis of the camera, may be determined in negative correlation with the uprightness of the driver's sitting posture, the height of the driver and/or the eye height of the driver. If the driver's back is inclined further forward/backward than that of an upright-seated driver, the driver's eye height is correspondingly reduced, which results in a larger blind zone in front of the vehicle. By enlarging the extent of the viewing angle in the direction perpendicular to the camera optical axis, the enlargement of the blind zone caused by the change in the driver's posture can be compensated dynamically.
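The two correlations described above can be summarised, for illustration, as a simple parameterisation of the output viewing angle; the gains and reference values below are assumed, not disclosed values.

```python
def target_view(pitch_deg, eye_height_m,
                base_extent_deg=60.0, ref_eye_height_m=1.20,
                pivot_gain=0.8, extent_gain_deg_per_m=40.0,
                max_extent_deg=120.0):
    """Downward pivot and vertical extent of the output viewing angle.

    Positive correlation: a larger pitch angle pivots the view further toward
    the ground.  Negative correlation: a lower eye height enlarges the extent."""
    pivot_down_deg = pivot_gain * max(pitch_deg, 0.0)
    extent_deg = base_extent_deg + extent_gain_deg_per_m * max(
        0.0, ref_eye_height_m - eye_height_m)
    return pivot_down_deg, min(extent_deg, max_extent_deg)
```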
As another example, the timing of switching between the different display modes may also be controlled based on the posture of the driver. It will be appreciated that a shorter driver has a larger blind zone in front of the vehicle than a taller driver, which also means a greater potential threat. Therefore, as the uprightness of the driver's sitting posture, the driver's height and/or the driver's eye position decreases, the switching between the different display modes should be triggered earlier, so that a shorter driver can avoid the risk sooner.
Fig. 3 shows, in an exemplary embodiment, a flow chart of one method step of the method shown in fig. 2. In the embodiment shown in fig. 3, the method step S4 in fig. 2 comprises, for example, sub-steps S401-S407.
In step S401, it is checked whether the condition for activating the forward field-of-view compensation function is satisfied. Such a check may be performed, for example, in conjunction with the pitch state of the vehicle, i.e., it may be checked whether the vehicle is in an uphill driving state. Additionally, it may also be checked in this step whether the driver has manually activated the field-of-view compensation function. Further, the determination may also be made directly from the ignition state of the vehicle, i.e., the activation condition for the road surface view assistance is deemed satisfied as soon as the vehicle is ignited or started.
If it is determined that the above condition is not met, the image of the environment outside the vehicle captured by the vehicle camera is not presented to the driver in step S402. This also means that the image is not output in any display mode.
If the condition for activating the forward field-of-view compensation function is satisfied, the captured image of the environment outside the vehicle is output at the first viewing angle in the first display mode M1 in step S403. Here, the first display mode M1 corresponds, for example, to a "normal mode" or "linear mode", in which the image of the road surface ahead is displayed at a standard lens focal length and image distortion is removed as far as possible by an algorithm.
In step S404, the first display mode M1 is corrected based on the posture of the driver. For example, it is first determined preliminarily, from the pitch state or ignition state of the vehicle, that the image should be output at the first viewing angle F1 in the first display mode M1. In this step, the first viewing angle F1 can then be adjusted within a small range, taking the driver's posture (e.g. height, sitting posture) into account, so that the determined viewing-angle range fits the driver's posture. After the adjustment the first viewing angle F1 becomes, for example, F11, and the first display mode M1 is accordingly corrected to M11. The driver is then provided with the image of the road environment ahead in the corrected first display mode M11.
In step S405, it is checked whether the pitch state of the vehicle satisfies a preset condition. Such a check may be performed, for example, in terms of the pitch angle of the vehicle. Here, satisfying the preset condition includes:
-the vehicle transitions from a first road segment to a second road segment, the vehicle having a greater pitch angle when driving on the first road segment than when driving on the second road segment;
-the pitch angle of the vehicle is greater than a preset threshold; and/or
-the vehicle is travelling in an off-road section causing a continuous change in the pitch angle.
If the above-described preset condition is found not to be satisfied, that is, if the vehicle is currently traveling on a horizontal road, it is possible to continue to remain outputting the front road surface image to the driver in the initially determined (corrected) first display mode M11.
If the pitch state is found to satisfy the preset condition, the captured image of the environment outside the vehicle may be output at the second viewing angle F2 in the second display mode M2 in step S406. Here, the second display mode M2 corresponds, for example, to a "wide-angle mode" or an "off-road mode", and the second viewing angle F2 is expanded relative to the first viewing angle F1 by an additional viewing angle at least downward in the direction perpendicular to the optical axis of the camera. As an example, if the first viewing angle F1 in the first display mode M1 corresponds to a narrow angular range of 60°, the second viewing angle F2 in the second display mode M2 corresponds, for example, to a wide angular range of 120°, in which case the second viewing angle F2 may cover the first viewing angle F1. As another example, if the first viewing angle F1 of the first display mode M1 corresponds to a narrow angular range of 60°, the second viewing angle F2 of the second display mode M2 may correspond to another narrow angular range of likewise 60°; in that case the second viewing angle F2 is pivoted relative to the first viewing angle F1 about a reference point, so that the first viewing angle F1 and the second viewing angle F2 do not overlap or only partially overlap.
In step S407, similarly to step S404, the second display mode M2 may be corrected based on the posture of the driver. For example, the second viewing angle F2 may be adjusted to a small extent within a second angle interval referenced to the second viewing angle F2 according to the driver's height, sitting posture, eye height and the like. After the posture-based correction is completed, the second viewing angle F2 becomes, for example, F21, and the second display mode M2 becomes M21. Finally, the driver is provided with the image of the road environment ahead at the corrected second viewing angle F21 in the corrected second display mode M21.
In addition to taking the driver's posture into account by adjusting the viewing angle and correcting the display mode, the timing at which a switch is triggered can also be controlled based on the driver's posture when switching between different display modes. For example, for a driver of shorter stature the switching between different display modes may be triggered earlier, whereas for a taller driver it may be triggered later.
Fig. 4 shows a flow chart of one method step of the method shown in fig. 2 in another exemplary embodiment. In the embodiment shown in fig. 4, the method step S4 in fig. 2 comprises, for example, sub-steps S410-S450.
In step S410, based on the pitch state of the vehicle during travel and the posture of the driver, a switch is made from the first display mode M1, in which the captured image of the environment outside the vehicle is output at the first viewing angle F1, to the second display mode M2, which corresponds to the second viewing angle F2. Here the mode switch is performed, for example, with a first viewing-angle transformation amplitude ΔF.
In step S420, while the captured image is output in the second display mode M2, it is checked whether the pitch state of the vehicle and the posture of the driver have changed to a predetermined degree. For example, it is possible to check whether the line of sight direction and the sitting posture of the driver have changed to a predetermined degree. In addition, it is also possible to check whether the pitch angle of the vehicle fluctuates to a predetermined degree.
If such a change is not detected in step S420, the current second display mode M2 is not changed, and the second viewing angle F2 for outputting the image of the environment outside the vehicle in the second display mode M2 is not changed.
If such a change is detected in step S420, it is further checked in step S430 whether such a change exceeds a limit value.
If the limit value is not exceeded, this indicates that the pitch angle of the vehicle and the posture of the driver have changed only slightly, and not to such an extent that a change of display mode is required. In this case, in step S440, the second viewing angle F2 in the second display mode M2 may be dynamically adjusted with a second viewing-angle transformation amplitude ΔF' of smaller magnitude while remaining in the second display mode M2. The first viewing-angle transformation amplitude ΔF is significantly larger than the second viewing-angle transformation amplitude ΔF', i.e. the second viewing angle F2 is only fine-tuned by a small amount in terms of its angular position or extent.
If it is found in step S430 that the change in the pitch angle of the vehicle or in the driver's posture exceeds the limit value, a switch is made from the second display mode M2 into a third display mode M3 in step S450; in this case the mode switch is performed, for example, with a third viewing-angle transformation amplitude ΔF'' that is larger than the second viewing-angle transformation amplitude ΔF'. This means that, because the shape of the road on which the vehicle is travelling has changed abruptly, or because the driver's state has changed momentarily, the angle interval referenced to the second viewing angle F2 in the second display mode M2 can no longer fully cover the driver's blind zone, and the reference therefore has to be adjusted.
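One possible realisation of this coarse/fine behaviour is to clamp each update of the viewing angle to a different maximum step depending on how large the detected change is; the limit and step values in the sketch are assumptions.

```python
def adjust_view_center(current_center_deg, desired_center_deg, change_deg,
                       fine_change_limit_deg=5.0,
                       fine_step_deg=1.0, coarse_step_deg=10.0):
    """Fine-tune within the current mode for small changes (cf. step S440),
    re-base with a larger amplitude for large changes (cf. step S450)."""
    max_step = (fine_step_deg if abs(change_deg) <= fine_change_limit_deg
                else coarse_step_deg)
    delta = desired_center_deg - current_center_deg
    delta = max(-max_step, min(max_step, delta))  # clamp to the chosen amplitude
    return current_center_deg + delta
```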
Fig. 5a-5d show schematic diagrams of display modes determined in different pitch states of the vehicle.
The camera 51 of the vehicle 100 is configured to take an image of a road surface ahead, and the camera 51 is, for example, a wide-angle camera and has a preset angle of view F0. In the embodiment shown in fig. 5a to 5d, an appropriate display mode may be determined according to the pitch angle of the vehicle 100 during traveling, in which a partial viewing angle F1, F2 is cut from a preset viewing angle F0 of the camera 51, and then a photographed image of the road surface ahead is output on a display unit (not shown) of the vehicle 100 at the cut partial viewing angle F1, F2.
As shown in fig. 5a, the vehicle 100 is travelling on a horizontal road surface (e.g., with a gradient of 0°). In this case, the pitch angle of the vehicle 100 may be determined to be 0°. The field of view FS of the driver 200 can be determined from the sight line running from the eyes of the driver 200 past the edge of the hood of the vehicle 100. It can be seen that the driver 200 can observe the area directly in front of the vehicle 100 well when the vehicle 100 is travelling on a horizontal road surface. In this case, the image captured by means of the camera 51 is presented to the driver 200, for example, in a first display mode at a first viewing angle F1, which is selected as a narrow angular range of 60°. To generate the view seen from the first viewing angle F1, the original wide-angle output of the camera 51 is modified, for example by cropping and distortion-correction techniques, so that a narrow-angle image corresponding to the first viewing angle F1 is cut out and output to the driver 200.
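On level ground, the near edge of the visible region FS follows from similar triangles between the driver's eye and the hood edge; the helper below illustrates that geometry with assumed example values.

```python
def blind_zone_end_m(eye_height_m, hood_height_m, eye_to_hood_m):
    """Horizontal distance from the driver's eye to the first visible ground
    point, for a sight line grazing the hood edge on level ground."""
    if eye_height_m <= hood_height_m:
        return float("inf")   # hood at or above eye level: ground never visible
    return eye_to_hood_m * eye_height_m / (eye_height_m - hood_height_m)

# Example: eye 1.2 m above ground, hood edge 0.9 m high and 1.8 m ahead of the
# eye -> the ground becomes visible about 7.2 m ahead of the driver's eye.
```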
As shown in fig. 5b, the vehicle 100 is travelling over rough off-road terrain, during which the pitch angle of the vehicle 100 changes repeatedly as the gradient of the ground changes. Especially when the vehicle 100 transitions from an uphill to a downhill section, the driver 200 can see neither the area ahead at the horizontal level nor the road surface of the following downhill section, because the field of view FS of the driver 200 is blocked by the hood of the vehicle 100. In this case, in order to provide the driver 200 with an optimal view of the road shape ahead, a switch is made, for example, from the first display mode to the second display mode, in which the image captured by means of the camera 51 is displayed to the driver 200 at a second viewing angle F2; this second viewing angle F2 is selected, for example, as a wide-angle range of 100° and extends downward relative to the first viewing angle F1, in the direction perpendicular to the optical axis of the camera, by an additional viewing-angle range. The larger vertical viewing angle of the camera thus allows the driver 200 to observe more of the terrain ahead. In this way, the driver 200 can estimate the steepness of the downhill section well from the supplementary view provided on the display unit and can also recognize and avoid obstacles.
In the embodiments shown in fig. 5c and 5d, the vehicle 100 travels along an uphill section with a relatively gentle gradient and a relatively steep gradient, respectively; the vehicle 100 has a pitch angle of, for example, 30° in fig. 5c and 60° in fig. 5d. It can be seen that the field of view FS of the driver 200 is restricted more severely as the pitch angle of the vehicle 100 during travel increases. In order to adapt the supplementary view presented to the driver 200 to such terrain changes, fig. 5c and 5d each show a transition from the first viewing angle F1 shown in fig. 5a to a second viewing angle F2', F2''. Unlike the scenario shown in fig. 5b, in the scenarios shown in fig. 5c and 5d the second viewing angles F2', F2'' no longer take the form of a wide-angle range but remain a narrow angular range of 60°; after switching into the second display mode, however, this narrow angular range is pivoted further downward in the vertical direction. That is, the second viewing angles F2', F2'' are closer to the ground than the first viewing angle F1. It can also be seen that as the pitch angle of the vehicle 100 increases (the road becomes steeper), the second viewing angle F2'' is directed toward the ground to an even greater extent, thereby ensuring that the terrain in the blind zone in front of the vehicle 100 is presented to the driver 200 more completely. It is also advantageous that the camera output can be modified according to the pitch state and the driver's posture so that the viewing angle used for outputting the captured image can be kept level at all times, regardless of the road morphology and changes in the driver's posture.
Fig. 6 shows a schematic view of the display mode determined at different postures of the driver.
A line of sight recognition unit 52 is arranged, for example, in the cabin of the vehicle 100, which line of sight recognition unit 52 is oriented and arranged such that the position or the line of sight direction of at least one eye of the driver 201, 202 can be detected and determined. According to one embodiment, the line-of-sight recognition unit 52 includes an in-vehicle camera and an infrared illumination unit through which infrared rays are emitted toward the face of the person. Then, a bright spot is generated in the pupil area of the eye of the person to be recognized, whereby the eye position of the driver can be determined by recognizing the bright spot in the captured image. Furthermore, the gaze direction may be determined by other image processing steps, such as feature extraction, determination of the pupil center.
In addition to the eye position of the driver detected by the sight-line recognition unit 52, the sitting posture or height can be recognized by detecting the position of the seat 60. In the vehicle 100, a reference position is preset for the seat 60, for example, which corresponds to a standard height (e.g. 1.70 m) or a standard sitting posture of the driver 201, 202. By acquiring the inclination angle of the backrest of the seat 60 and the displacement of the seat 60 in the horizontal/vertical direction, the deviation of the seat 60 from the reference position can then be determined, and from this the deviation of the field of view of the drivers 201, 202 from the standard field of view can be estimated.
In fig. 6, the eye position and the gaze direction are determined, for example by using the sight-line recognition unit 52 and the seat position sensor, for two drivers 201, 202 of different heights or different sitting postures. It can be seen that the first driver 201 is significantly taller in the seated state than the second driver 202 and therefore has a correspondingly higher eye position than the second driver 202. Viewing angles F21, F22 for displaying the image of the road environment ahead in the vehicle are also shown for the two drivers 201, 202 and their respective detected postures. The direct forward view of the second driver 202 is blocked by the front vehicle-body structure (e.g., the hood) to a greater extent than that of the first driver 201, resulting in a larger blind zone in the direction perpendicular to the vehicle travel direction. As shown in fig. 6, the observation viewing angle F22 determined for the second driver 202 therefore has a larger extent in the vertical direction of the camera, so that the compensating view presented to the driver 202 covers his or her larger blind zone in the direction perpendicular to the vehicle travel direction well.
In an embodiment that is not shown, it is also possible, in the event of a predefined change in the sight-line direction of the driver 201, 202 of the vehicle 100, to extend the current viewing angle used for outputting the image of the environment outside the vehicle in the direction of that predefined change. For example, when the driver 201, 202 looks to the left and the angle by which the vehicle 100 turns to the left exceeds an angle threshold, the viewing angle at which the image of the environment outside the vehicle is output on the display unit may be shifted to the left, so that more road information, for example the area obscured by the A-pillar on the left side of the vehicle, is presented to the driver.
Although specific embodiments of the invention have been described herein in detail, they have been presented for purposes of illustration only and are not to be construed as limiting the scope of the invention. Various substitutions, alterations, and modifications may be devised without departing from the spirit and scope of the present invention.

Claims (19)

1. A method for assisting a driver (200) of a vehicle (100) in monitoring an environment external to the vehicle, the method comprising the steps of:
S1: capturing an image of an environment outside the vehicle by means of a camera (51) arranged on the vehicle (100);
S2: acquiring a pitch state of a vehicle (100) during travel;
S3: acquiring a posture of a driver (200) of a vehicle (100); and
S4: based on the pitch state of the vehicle (100) during travel and the posture of the driver (200), the captured image of the environment outside the vehicle is output on a display unit (40) of the vehicle (100) in a switched manner between different display modes in which the captured image of the environment outside the vehicle is output at least at different angles of view of a camera (51).
2. The method according to claim 1, wherein in the different display modes the captured images of the environment outside the vehicle are output at least at different vertical viewing angles of the camera (51).
3. The method according to claim 1 or 2, wherein the proximity of the viewing angle to the ground corresponding to the display mode to be switched is determined positively correlated to the pitch angle of the vehicle (100).
4. The method according to any one of claims 1 to 3, wherein the extent of the viewing angle corresponding to the display mode to be switched to, in a direction perpendicular to the optical axis of the camera, is determined in relation to how upright the driver (200) is sitting, the height of the driver (200) and/or the eye height of the driver (200).
5. The method according to any one of claims 1 to 4, wherein the timing of the switching between the different display modes is controlled based on the posture of the driver (200), wherein the switching between the different display modes is triggered earlier as the uprightness of the sitting posture of the driver (200), the height of the driver (200) and/or the eye position of the driver (200) decreases.
6. The method according to any one of claims 1 to 5, wherein, in the case that the switch into a determined display mode has already been made with a first viewing-angle transformation amplitude based on the pitch state of the vehicle (100), the viewing angle corresponding to the determined display mode is dynamically adjusted with a second viewing-angle transformation amplitude based on changes in the posture of the driver (200), wherein the first viewing-angle transformation amplitude is larger than the second viewing-angle transformation amplitude.
7. The method according to any one of claims 1 to 6, wherein the camera (51) is fixedly arranged on the vehicle (100) and has a preset viewing angle, wherein in at least one display mode the image of the environment outside the vehicle is output at the preset viewing angle, and wherein in at least one other display mode a partial viewing angle is extracted from the preset viewing angle and the image of the environment outside the vehicle is output at the extracted partial viewing angle.
8. The method according to any one of claims 1 to 7, wherein in case the pitch state of the vehicle (100) does not satisfy a preset condition, the captured image of the environment outside the vehicle is output in a first display mode at a first viewing angle, and in case the pitch state of the vehicle (100) satisfies the preset condition, the captured image of the environment outside the vehicle is output in a second display mode at a second viewing angle, the second viewing angle comprising an additional viewing angle that extends at least downwards in a direction perpendicular to the optical axis of the camera with respect to the first viewing angle.
9. The method of claim 8, wherein
the first viewing angle corresponds to a narrow angular range, the second viewing angle corresponds to a wide angular range, and the second viewing angle covers the first viewing angle; or alternatively
the first viewing angle corresponds to a narrow angular range and the second viewing angle corresponds to another narrow angular range, the first viewing angle and the second viewing angle not overlapping or only partially overlapping.
10. The method according to any one of claims 1 to 9, wherein in the first display mode the first viewing angle is adjusted, based on the pitch angle of the vehicle (100) and/or the posture of the driver (200), within a first angle interval defined with reference to the first viewing angle, and in the second display mode the second viewing angle is adjusted, based on the pitch angle of the vehicle (100) and/or the posture of the driver (200), within a second angle interval defined with reference to the second viewing angle.
11. The method according to any one of claims 8 to 10, wherein satisfying the preset condition comprises:
the vehicle (100) transitioning from a first road segment to a second road segment, the vehicle (100) having a greater pitch angle when driving on the first road segment than when driving on the second road segment;
the pitch angle of the vehicle (100) being greater than a preset threshold value; and/or
the vehicle (100) traveling on an off-road section that causes a continuous change in the pitch angle.
12. The method according to any one of claims 1 to 11, wherein, in the case of a predefined change in the gaze direction of the driver (200) of the vehicle (100), the current viewing angle for outputting the image of the environment outside the vehicle is extended in the direction of the predefined change in the gaze direction.
13. The method according to any one of claims 1 to 12, wherein the posture of the driver (200) is acquired by:
estimating a sitting posture, a height and/or an eye position of the driver (200) from the seat position of the driver (200);
detecting an eye position and/or a gaze direction of the driver (200) by means of an optical sensor arranged in the cabin of the vehicle (100); and/or
reading a posture pre-stored for the driver (200) by recognizing identity information of the driver (200).
14. The method according to any one of claims 1 to 13, wherein the method further comprises the steps of:
acquiring travel speed information of the vehicle (100), and additionally controlling the switching between the different display modes based on the travel speed information.
15. The method according to any one of claims 1 to 14, wherein the method further comprises the steps of:
acquiring steering angle information of the vehicle (100), and additionally controlling the switching between the different display modes based on the steering angle information.
16. The method according to any one of claims 1 to 15, wherein in different display modes the captured images of the environment outside the vehicle are output at different horizontal viewing angles of the camera (51).
17. The method according to any one of claims 1 to 16, wherein the captured image of the environment outside the vehicle is output with different degrees of distortion correction in the different display modes.
18. An apparatus (1) for assisting a driver (200) of a vehicle (100) in monitoring an environment external to the vehicle, the apparatus (1) being adapted to perform the method according to any one of claims 1 to 17, the apparatus (1) comprising:
an image acquisition module (10) configured to receive an image of an environment external to the vehicle captured by means of a camera (51) arranged on the vehicle (100);
a first acquisition module (21) configured to be able to acquire a pitch state of the vehicle (100) during travel;
a second acquisition module (22) configured to be able to acquire a posture of a driver (200) of the vehicle (100); and
a control module (30) configured to be able to output the captured image of the environment outside the vehicle on a display unit (40) of the vehicle (100), based on the pitch state of the vehicle (100) during travel and the posture of the driver (200), switchably between different display modes in which the captured image of the environment outside the vehicle is output at least at different viewing angles of the camera (51).
19. A machine-readable storage medium on which a computer program is stored which, when run on a computer, performs the method according to any one of claims 1 to 17.
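Purely for illustration and not as part of the claims, the following Python sketch shows one possible way to evaluate the preset condition recited in claims 8 and 11 from a short history of pitch-angle samples; the window length, the thresholds and all identifiers are assumptions.

from collections import deque

class PresetConditionChecker:
    """Illustrative evaluation of the three alternatives of the preset condition:
    (a) transition from a steeper first road segment to a flatter second one,
    (b) pitch angle above a preset threshold, and/or
    (c) continuously changing pitch, as on an off-road section.
    """

    def __init__(self, pitch_threshold_deg=8.0, drop_threshold_deg=5.0,
                 variation_threshold_deg=3.0, window=20):
        self.pitch_threshold_deg = pitch_threshold_deg
        self.drop_threshold_deg = drop_threshold_deg
        self.variation_threshold_deg = variation_threshold_deg
        self.history = deque(maxlen=window)

    def update(self, pitch_deg):
        """Feed one pitch sample; return True if any alternative is met."""
        self.history.append(pitch_deg)
        samples = list(self.history)
        older, recent = samples[: len(samples) // 2], samples[len(samples) // 2:]

        # (a) segment transition: average pitch dropped markedly within the window
        segment_transition = bool(older) and (
            sum(older) / len(older) - sum(recent) / len(recent)
            > self.drop_threshold_deg)
        # (b) absolute pitch threshold
        above_threshold = pitch_deg > self.pitch_threshold_deg
        # (c) off-road: pitch keeps varying strongly over a full window
        continuous_change = (len(samples) == self.history.maxlen and
                             max(samples) - min(samples) > self.variation_threshold_deg)
        return segment_transition or above_threshold or continuous_change

# feeding samples from, e.g., a crest: steep climb followed by level road
checker = PresetConditionChecker()
for pitch in [9.0, 9.5, 9.0, 8.5, 2.0, 1.0, 0.5]:
    print(checker.update(pitch))

With these example samples the condition is met while the pitch exceeds the threshold on the climb and again, after a short gap, when the averaged pitch drops at the crest.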
CN202211106067.4A 2022-09-09 2022-09-09 Method and device for assisting a driver in monitoring the environment outside a vehicle Pending CN115303181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211106067.4A CN115303181A (en) 2022-09-09 2022-09-09 Method and device for assisting a driver in monitoring the environment outside a vehicle

Publications (1)

Publication Number Publication Date
CN115303181A 2022-11-08

Family

ID=83867502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211106067.4A Pending CN115303181A (en) 2022-09-09 2022-09-09 Method and device for assisting a driver in monitoring the environment outside a vehicle

Country Status (1)

Country Link
CN (1) CN115303181A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination