CN115675289B - Image display method and device based on driver visual field state in driving scene - Google Patents

Image display method and device based on driver visual field state in driving scene

Info

Publication number
CN115675289B
CN115675289B (application CN202211720123.3A)
Authority
CN
China
Prior art keywords
vehicle
driver
distance
area
column
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211720123.3A
Other languages
Chinese (zh)
Other versions
CN115675289A (en)
Inventor
王源
黄志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xihua Technology Co Ltd
Original Assignee
Shenzhen Xihua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xihua Technology Co Ltd
Priority to CN202211720123.3A (CN115675289B)
Priority to CN202310447659.0A (CN116674468A)
Publication of CN115675289A
Application granted
Publication of CN115675289B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/24 Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view in front of the vehicle

Abstract

An embodiment of the present application provides an image display method and device based on the driver's visual field state in a driving scene. The method comprises the following steps: acquiring comprehensive state information of the vehicle; predicting, according to the comprehensive state information, a target area that the driver's eyes need to observe and that is in an occluded state; and controlling, according to the target area, a left screen on the vehicle's left A-pillar and/or a right screen on the right A-pillar to display an image. Compared with the existing single scheme of fixedly displaying a fixed viewing range on the A-pillar screens of an intelligent cockpit, the method and device help to improve the flexibility, accuracy and comprehensiveness of the image display performed by the automatic driving domain controller, improve the driving safety of the vehicle, and improve the driving experience of the user.

Description

Image display method and device based on driver visual field state in driving scene
Technical Field
The application relates to the technical field of safe driving of automobiles, in particular to an image display method and device based on the visual field state of a driver in a driving scene.
Background
In recent years, traffic accidents caused by collisions between automobiles and pedestrians have increased, and such accidents can cause great injury to pedestrians. A large proportion of these accidents is attributable to the A-pillar blind area. The A-pillars are the left and right front columns connecting the roof to the front cabin, and mainly support the windshield and the roof. During actual driving, the A-pillars always block the driver's line of sight to some extent and form blind areas; in particular when turning, the driver has to swing the body and head back and forth and left and right to adjust the field of view and overcome the blind area, which is both inconvenient and unsafe.
To address the blind-area problem, most products currently on the market use a reflector or lens to obtain object information in the blind area through refraction or reflection of light, or use a camera to capture the picture blocked by the A-pillar and display it on the A-pillar.
However, because products such as reflectors or lenses rely on refraction and reflection of light, the images the driver obtains through them are small, and some are even mirror images, which undoubtedly increases the driver's reaction time; moreover, when the light is strong, light reflected into the driver's eyes easily causes glare and affects driving safety. Further, capturing the picture blocked by the A-pillar with a camera and projecting it onto the corresponding A-pillar is only effective in specific situations, such as when the vehicle is turning or obstacles surround the vehicle; if the display remains on at all other times, it has no practical value, may instead impair driving safety, and cannot eliminate the safety hazard that the A-pillar blind area poses during driving.
A comparatively intelligent way of solving the blind-area problem is to acquire external images through devices such as exterior cameras and project them onto a screen. Generally, however, the picture shown on the screen is to a large extent the raw picture obtained by the camera, without considering comfort and accuracy, and the picture captured by the camera differs from what the driver's eyes would see, so the driver may instead be disturbed by the screen.
Therefore, how to safely eliminate the safety hazard that the blind areas caused by the A-pillars on both sides of the vehicle bring to driving is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the present application provide an image display method and device based on the driver's visual field state in a driving scene. Compared with the existing single scheme of fixedly displaying a fixed viewing range on the A-pillar screens of an intelligent cockpit, the method and device help to improve the flexibility, accuracy and comprehensiveness of the image display performed by the automatic driving domain controller, improve the driving safety of the vehicle, and improve the driving experience of the user.
In a first aspect, an embodiment of the present application provides an image display method based on a driver's visual field state in a driving scene, where the method is applied to an automatic driving domain controller of a domain controller system of a vehicle, where the domain controller system includes the automatic driving domain controller and a vehicle body domain controller, and the automatic driving domain controller is in communication connection with the vehicle body domain controller; the method comprises the following steps:
acquiring comprehensive state information of the vehicle, wherein the comprehensive state information comprises at least one of the following: steering wheel angle, head posture of driver, and posture of passenger in assistant driver seat;
predicting, according to the comprehensive state information, a target area that the driver's eyes need to observe and that is in an occluded state, wherein the target area comprises at least one of the following: an area that is blocked by the left A-pillar of the vehicle and whose farthest boundary is less than a preset distance from the vehicle; an area that is blocked by the right A-pillar of the vehicle and whose farthest boundary is less than the preset distance from the vehicle; and the viewing area corresponding to the image displayed by the right rear-view mirror of the vehicle;
controlling, according to the target area, a left screen of the left A-pillar of the vehicle and/or a right screen of the right A-pillar to display an image, the left screen being a screen arranged on the left A-pillar and the right screen being a screen arranged on the right A-pillar.
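A minimal sketch of the three claimed steps in Python; every threshold, name, and sign convention below is an assumption made for illustration, not taken from the patent:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TargetArea(Enum):
    LEFT_A_PILLAR = auto()      # area blocked by the left A-pillar
    RIGHT_A_PILLAR = auto()     # area blocked by the right A-pillar
    RIGHT_MIRROR_VIEW = auto()  # viewing area of the right rear-view mirror
    NONE = auto()

@dataclass
class VehicleState:
    steering_angle_deg: float      # steering wheel angle (+right, -left)
    driver_head_yaw_deg: float     # driver head orientation (+right, -left)
    passenger_blocks_mirror: bool  # passenger occludes right-mirror sight line

def predict_target_area(state: VehicleState) -> TargetArea:
    """Step 2: predict which occluded area the driver needs to observe."""
    if state.driver_head_yaw_deg > 30 and state.passenger_blocks_mirror:
        return TargetArea.RIGHT_MIRROR_VIEW
    if state.steering_angle_deg < -15 or state.driver_head_yaw_deg < -20:
        return TargetArea.LEFT_A_PILLAR
    if state.steering_angle_deg > 15 or state.driver_head_yaw_deg > 20:
        return TargetArea.RIGHT_A_PILLAR
    return TargetArea.NONE  # screens stay off when no occlusion is relevant

def control_screens(area: TargetArea) -> dict:
    """Step 3: decide which A-pillar screen(s) display an image."""
    return {
        "left_screen_on": area is TargetArea.LEFT_A_PILLAR,
        "right_screen_on": area in (TargetArea.RIGHT_A_PILLAR,
                                    TargetArea.RIGHT_MIRROR_VIEW),
    }
```

Note how the normally-off behaviour described later in the text falls out of the `NONE` branch: with no predicted occlusion, neither screen is switched on.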
Common vehicle models on the current market have an A-pillar connecting the front engine compartment and the roof, a B-pillar between the front and rear doors, and a C-pillar connecting the trunk and the roof. All three pillars block part of the driver's view while driving. Because ordinary vehicles are equipped with rear-view mirrors and a reversing camera, the B-pillar and the C-pillar generally pose no safety hazard to the driver; the A-pillars, however, are located at the front of the vehicle, so they may create a certain safety hazard while the driver is driving.
The vehicle is generally the subject to which the method provided in the embodiments of the present application is applied. Specifically, the method is applied to the automatic driving domain controller of the vehicle's domain controller system, and the automatic driving domain controller is the execution subject of the method. The domain controller system includes the automatic driving domain controller and a vehicle body domain controller; the vehicle body domain controller is generally used to control devices in the vehicle, including devices on the exterior of the vehicle. The two controllers divide the work between them so that the driver can observe a blocked region in time.
It should be emphasized in advance that the method decides, according to the driver's observation line of sight, whether the image is displayed on the screen of the left A-pillar, the right A-pillar, or both; accordingly, the screens remain normally off whenever the driver's line of sight is not blocked during driving.
In the method of the first aspect, comprehensive state information of the vehicle is first obtained. The comprehensive state information may be collected by the vehicle body domain controller, but the subject that processes the obtained information is the automatic driving domain controller;
secondly, a target area that the driver's eyes need to observe and that is in an occluded state can be predicted from the comprehensive state information. The target area falls into three cases. In case one, the left A-pillar of the vehicle blocks the driver's view, and the target area is the area blocked by the left A-pillar whose farthest boundary is less than a preset distance from the vehicle. In case two, the right A-pillar blocks the driver's view, and the target area is the corresponding area blocked by the right A-pillar. In case three, the passenger in the front passenger seat blocks the driver's view of the right rear-view mirror, and the target area is the viewing area corresponding to the image displayed by the right rear-view mirror of the vehicle.
Based on the three situations, the key point of the method is how to judge the occurrence of the three situations and what image is correspondingly projected on the screen, so that the driver can drive safely without suffering from the trouble that the view is blocked.
Specifically, the automatic driving domain controller performs prediction and judgment based on the comprehensive state information acquired from the vehicle body domain controller, which means the acquired comprehensive state information suffices to identify which of the three cases has occurred. The automatic driving domain controller thus determines, by analyzing the comprehensive state information, what is blocking the driver's line of sight.
Once it is determined what is blocking the driver's line of sight, the target area can be determined; the target area is closely related to the image displayed on the screen. For example, when the target area is determined to be the area blocked by the left A-pillar whose farthest boundary is less than the preset distance from the vehicle, an image of that area is acquired through the vehicle body domain controller, processed, and displayed on the screen of the left A-pillar. In the method, the subject that processes the image is the automatic driving domain controller.
Through the cooperation of the automatic driving domain controller and the vehicle body domain controller, the system becomes more intelligent: it determines, at the appropriate time, the blocked area the driver needs to observe and controls the screen to display the picture of the corresponding area, while keeping the screens normally off at other times, so that the driver's view is not obstructed and energy is saved.
In yet another possible implementation of the first aspect, the comprehensive state information of the vehicle includes the driver's head posture and the posture of the passenger in the front passenger seat; predicting, according to the comprehensive state information, the target area that the driver's eyes need to observe and that is in an occluded state comprises the following steps:
according to the head posture of the driver, determining that the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces a right side rearview mirror of the vehicle;
determining whether the sight of the driver observing the right side rearview mirror of the vehicle is shielded by the passenger in the copilot position according to the head posture of the driver and the posture of the passenger in the copilot position;
and if the sight of the driver observing the right side rearview mirror of the vehicle is blocked by the passenger in the front passenger seat, determining that the target area which the eyes of the driver need to observe and is in the blocked state is the viewing area corresponding to the image displayed by the right side rearview mirror of the vehicle.
Specifically, the screen can display pictures other than the picture blocked by the A-pillar. In this embodiment, the other pictures include the image shown by the rear-view mirror on the front passenger side. The screen of the right A-pillar is installed on the A-pillar on the front passenger side and faces the driver's seat; when it is detected that the driver's line of sight toward the passenger-side rear-view mirror is blocked, the screen of the right A-pillar is controlled to display the image information of the side and rear of the passenger side, substituting for the passenger-side rear-view mirror so that the driver can observe the image that mirror should display. Second prompt information is also output, so that if the driver overlooks the picture displayed on the right A-pillar screen, the driver can still react in time to conditions approaching from the side or rear.
In the third of the three cases, that is, when the passenger in the front passenger seat blocks the driver's view of the right rear-view mirror and the target area is the viewing area corresponding to the image displayed by the right rear-view mirror of the vehicle, the acquired comprehensive state information includes at least the driver's head posture and the posture of the passenger in the front passenger seat.
Whether the driver's line of sight toward the right rear-view mirror is blocked by the passenger in the front passenger seat is determined from the driver's head posture and the passenger's posture. If it is blocked, the target area that the driver needs to observe is the viewing area corresponding to the image displayed by the right rear-view mirror. The automatic driving domain controller may thus be configured to analyze different information depending on the situation.
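The occlusion check described above could be approximated geometrically, for instance by testing whether the passenger's head (modelled as a circle in a 2D cabin coordinate frame) intersects the line segment from the driver's eyes to the right mirror. The coordinate frame and head radius are illustrative assumptions:

```python
import math

def segment_point_distance(p, a, b):
    """Shortest distance from point p to segment a-b (2D cabin coordinates)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamping the parameter to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def mirror_view_blocked(driver_eye, mirror_pos, passenger_head,
                        head_radius=0.12):
    """True if the passenger's head (circle of head_radius metres) cuts the
    driver's line of sight to the right rear-view mirror."""
    return segment_point_distance(passenger_head, driver_eye, mirror_pos) <= head_radius
```

In practice the eye and head positions would come from the in-cabin pose estimation implied by "head posture" and "posture of the passenger"; this sketch only shows the final geometric test.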
In yet another possible implementation of the first aspect, if the target area is the viewing area corresponding to the image displayed by the right rear-view mirror of the vehicle, the comprehensive state information of the vehicle includes in-vehicle state information and exterior state information, the in-vehicle state information including the driver's head posture and the posture of the passenger in the front passenger seat; controlling, according to the target area, the left screen of the left A-pillar and/or the right screen of the right A-pillar of the vehicle to display an image includes:
acquiring image data of at least one exterior camera, among the plurality of exterior cameras of the vehicle, whose viewing area includes the viewing area corresponding to the image displayed by the right rear-view mirror;
generating a target image adapted to the right screen of the right A-pillar according to the image data of the at least one exterior camera, the size of the right screen of the right A-pillar, and the imaging characteristics of the right rear-view mirror;
and displaying the target image on the right screen of the right A-pillar.
In the above process, a target image adapted to the screen of the right A-pillar is generated; the automatic driving domain controller needs to adjust parameters such as the frame rate and viewing range of the image data collected by the exterior camera so as to generate a target image adapted to the right screen of the right A-pillar.
Optionally, when generating the target image adapted to the right screen of the right A-pillar, a standard reflecting the driver's observation habits is applied so as to produce a picture matching those habits; the standard may be obtained from a model fitted to the driver's current habits, or may be set manually at generation time.
Compared with the existing single scheme of fixedly displaying a fixed viewing range, this helps to improve the flexibility, accuracy and comprehensiveness of the image display performed by the automatic driving domain controller.
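As an illustration only (the patent does not specify the image pipeline), generating a target image adapted to the screen size and the mirror's imaging characteristics could involve an aspect-ratio crop, a horizontal flip approximating the mirror view, and resampling to the screen resolution:

```python
import numpy as np

def make_mirror_screen_image(frame: np.ndarray,
                             screen_h: int, screen_w: int) -> np.ndarray:
    """Centre-crop the camera frame to the screen's aspect ratio, mirror it
    horizontally (one plausible reading of the rear-view mirror's imaging
    characteristic), and downsample to the screen resolution by nearest
    neighbour. frame is a 2D (grayscale) array for simplicity."""
    h, w = frame.shape[:2]
    target_ratio = screen_w / screen_h
    if w / h > target_ratio:              # frame too wide: trim columns
        new_w = int(h * target_ratio)
        x0 = (w - new_w) // 2
        frame = frame[:, x0:x0 + new_w]
    else:                                 # frame too tall: trim rows
        new_h = int(w / target_ratio)
        y0 = (h - new_h) // 2
        frame = frame[y0:y0 + new_h, :]
    frame = frame[:, ::-1]                # horizontal flip (mirror view)
    h, w = frame.shape[:2]
    rows = np.arange(screen_h) * h // screen_h
    cols = np.arange(screen_w) * w // screen_w
    return frame[rows][:, cols]           # nearest-neighbour resample
```

Whether a flip is actually required depends on the camera's mounting and the driver's habits; a production pipeline would also handle colour, distortion correction, and the frame-rate adjustment mentioned above.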
In yet another possible implementation manner of the first aspect, the generating a target image adapted to the right screen of the right a-pillar according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right a-pillar, and the imaging characteristics of the right rearview mirror includes:
if the at least one exterior camera is a single camera, generating the target image adapted to the right screen of the right A-pillar according to the image data collected by that camera for the view shown by the right rear-view mirror, the size of the right screen of the right A-pillar, and the imaging characteristics of the right rear-view mirror;
if the at least one exterior camera comprises multiple cameras, fusing the multiple sets of image data collected by the exterior cameras to generate target image data corresponding to the image displayed by the right rear-view mirror, and then generating the target image adapted to the right screen of the right A-pillar according to the target image data, the size of the right screen, and the imaging characteristics of the right rear-view mirror.
Considering that the devices controlled by the vehicle body domain controllers of different vehicle models differ, the target image can still be adapted to the screen and to the driver's observation habits: the target image is generated in different ways according to the number of exterior cameras, which further improves the comprehensiveness of the image display performed by the automatic driving domain controller.
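The single- versus multi-camera branch could be sketched as below; the fixed-overlap averaging is a deliberately naive stand-in for real image registration and stitching:

```python
import numpy as np

def fuse_camera_frames(frames, overlap: int):
    """Naive fusion of horizontally adjacent camera frames: consecutive
    frames are assumed to share `overlap` columns, which are averaged.
    A real system would register and warp frames; this only illustrates
    the single- vs multi-camera branch of the claim."""
    if len(frames) == 1:
        return frames[0].astype(float)  # single camera: use its frame directly
    result = frames[0].astype(float)
    for frame in frames[1:]:
        frame = frame.astype(float)
        # blend the shared columns, then append the non-overlapping remainder
        result[:, -overlap:] = (result[:, -overlap:] + frame[:, :overlap]) / 2
        result = np.concatenate([result, frame[:, overlap:]], axis=1)
    return result
```

The fused panorama would then feed the same screen-adaptation step used in the single-camera case.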
In yet another possible implementation of the first aspect, the comprehensive state information includes at least one of a steering wheel angle or the driver's head posture, the head posture including the driver's head orientation; predicting, according to the comprehensive state information, the target area that the driver's eyes need to observe and that is in an occluded state comprises the following steps:
determining the pre-observation direction of the driver according to the comprehensive state information;
if the driver's pre-observation direction is the left side of the vehicle, determining the target area according to image data of at least one exterior camera, among the plurality of exterior cameras of the vehicle, whose viewing area includes the area blocked by the left A-pillar, the target area being the area blocked by the left A-pillar whose farthest boundary is less than a preset distance from the vehicle;
if the driver's pre-observation direction is the right side of the vehicle and the driver's head faces the right A-pillar, determining the target area according to image data of at least one exterior camera whose viewing area includes the area blocked by the right A-pillar, the target area being the area blocked by the right A-pillar whose farthest boundary is less than the preset distance from the vehicle.
The above process is key to the intelligence of the automatic driving domain controller. For the driver's comfort, the screens on the left and right A-pillars are normally off; after the target area is determined, the corresponding image is displayed. Determining the target area, however, first requires determining the driver's pre-observation direction, which is derived from the steering wheel angle and/or the driver's head posture. For example, if the steering wheel angle changes, the trajectory of the vehicle changes and the driver naturally looks in that direction, which means the steering wheel angle can represent the driver's observation direction.
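The pre-observation direction logic might look like the following sketch, where the 10-degree and 20-degree thresholds are invented for illustration and head orientation, when available, takes precedence over steering input:

```python
def pre_observation_direction(steering_angle_deg, head_yaw_deg=None):
    """Infer where the driver is about to look from steering wheel angle
    and (optionally) head yaw. Convention: positive = right, negative = left.
    Thresholds are illustrative assumptions, not values from the patent."""
    if head_yaw_deg is not None and abs(head_yaw_deg) > 20:
        return "right" if head_yaw_deg > 0 else "left"
    if steering_angle_deg > 10:
        return "right"
    if steering_angle_deg < -10:
        return "left"
    return "ahead"   # no target area: screens stay off
```

The "ahead" result corresponds to the normally-off screen state described above.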
In yet another possible implementation of the first aspect, if the target area is an area blocked by the left A-pillar of the vehicle whose farthest boundary is less than a preset distance from the vehicle, or an area blocked by the right A-pillar whose farthest boundary is less than the preset distance, the comprehensive state information of the vehicle includes in-vehicle state information and exterior state information, the in-vehicle state information including the driver's head posture and the steering wheel angle; controlling, according to the target area, the left screen of the left A-pillar and/or the right screen of the right A-pillar of the vehicle to display an image includes:
determining the running state of the vehicle according to the comprehensive state information of the vehicle, wherein the running state at least comprises straight running or turning;
determining state information of a target object according to the vehicle external state information, wherein the vehicle external state information is image data outside the vehicle, which is acquired through at least one vehicle external camera, the target object comprises pedestrians and/or other vehicles, and the state information of the target object comprises the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is straight and the distance between the pedestrian and the vehicle is smaller than or equal to a first distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the first prompt information is used for prompting the driver to observe a screen corresponding to the target area;
if the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is smaller than or equal to a second distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the second distance is smaller than the first distance;
if the target object is other vehicles, when the driving state of the vehicle is straight and the distance between the other vehicles and the vehicle is smaller than or equal to a third distance, displaying a picture of a target area corresponding to the other vehicles on a screen on the same side with the other vehicles, and outputting first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the another vehicle and the vehicle is smaller than or equal to a fourth distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side with the another vehicle, and outputting first prompt information, wherein the fourth distance is smaller than the third distance.
Further, since the time the driver has to observe, think and operate differs between driving straight and turning, the safety distance set for the target object also differs. The driving state of the vehicle is therefore determined first; the driving state includes at least straight driving or turning, and in other embodiments may include other situations, such as a U-turn or braking.
Furthermore, if the target object close to the vehicle is a pedestrian, different safety distances are set for different driving states: when driving straight the vehicle is generally faster, while when turning it is generally slower, giving the driver relatively more time to operate the vehicle and avoid danger, so the second distance is set smaller than the first distance; similarly, the fourth distance is smaller than the third distance. In summary, setting different safety distances copes with the complex situations that arise while the vehicle is driving, and is more reasonable and user-friendly.
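The four-threshold decision in the items above can be condensed into one lookup; the metre values are placeholders, since the patent leaves the first to fourth distances unspecified (requiring only that the turning thresholds be smaller than the straight-line ones):

```python
def should_display_and_alert(target_type, driving_state, distance_m,
                             thresholds=None):
    """Decide whether to show the blocked-area image and emit the first
    prompt information. Turning thresholds are smaller than straight-line
    ones because the vehicle is slower and the driver has more reaction
    time. Numeric values are illustrative assumptions."""
    if thresholds is None:
        thresholds = {
            ("pedestrian", "straight"): 20.0,  # first distance
            ("pedestrian", "turning"):  10.0,  # second distance (< first)
            ("vehicle",    "straight"): 30.0,  # third distance
            ("vehicle",    "turning"):  15.0,  # fourth distance (< third)
        }
    return distance_m <= thresholds[(target_type, driving_state)]
```

A dynamic variant would replace the static table with the safe distance prediction model described in the next implementation.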
In yet another possible implementation manner of the first aspect, before determining the state information of the target object according to the off-board state information, the method further includes:
obtaining a vehicle data set, wherein the vehicle data set comprises a historical driving state of the vehicle and first speed information, and the driving state at least comprises straight driving or turning;
generating the target object data set according to the vehicle external state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, the pedestrian form information comprising height and orientation, the vehicle data set comprising: vehicle driving state information, vehicle trajectory information, and third speed information;
inputting the target object data set and the vehicle data set into a safe distance prediction model to obtain a safe distance between the target object and the vehicle, wherein the safe distance includes the first distance, the second distance, the third distance or the fourth distance, the safe distance prediction model is a model obtained by training according to a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, the target object data set and the vehicle data set belong to feature data, and the safe distance belongs to tag data.
Specifically, the safe distance may be dynamically determined according to actual conditions of the vehicle during driving, and the safe distance in this embodiment includes the first distance, the second distance, the third distance, and the fourth distance, so that the safe distance of each target object is determined for subsequent operations before the driving state of the vehicle is determined.
Further, the safe distance is related to the actual driving/walking state of the vehicle and the target object, and therefore, first, related data of the vehicle is acquired, that is, a vehicle data set is acquired, the vehicle data set includes a historical driving state of the vehicle and first speed information for characterizing the current driving state of the vehicle from a certain point in time, and the driving state at least includes straight driving or turning; secondly, generating the target object data set according to the video information, wherein the target object data set comprises a pedestrian data set and/or a vehicle data set, the pedestrian data set comprises pedestrian form information and second speed information, the pedestrian form information is used for representing the motion state of the pedestrian at the current time point or time period, namely the pedestrian form information comprises height and orientation, and the vehicle data set comprises: and the vehicle running state information, the vehicle track information and the third speed information are used for representing the running states of the other vehicles at the current time point or time period.
Further, the target object data set and the vehicle data set are input into a safe distance prediction model. The safe distance prediction model is obtained by training on a plurality of target object data set samples, corresponding vehicle data set samples, and corresponding safe distances, where the target object data set and the vehicle data set belong to the feature data and the safe distance belongs to the label data. The safe distance prediction model can be realized with various model architectures and application logics; for example, the related data of the vehicle and of the target object may each be integrated into a travel trajectory, the two trajectories fitted to obtain a collision judgment result, and an appropriate safe distance determined according to the collision judgment result and the related data of the target object.
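The feature/label arrangement described above can be sketched as follows. This is a minimal illustrative sketch only: the field names, the `build_features` helper, and the linear stand-in for the trained model are all assumptions for illustration, not the patent's disclosed implementation.

```python
def build_features(vehicle_set: dict, target_set: dict) -> list:
    """Flatten the vehicle data set and the target object data set into the
    single feature vector (the 'feature data') fed to the model."""
    return [
        float(vehicle_set["is_turning"]),   # driving state: 0 = straight, 1 = turning
        vehicle_set["speed_mps"],           # first speed information
        target_set.get("height_m", 0.0),    # pedestrian form information (height)
        target_set.get("speed_mps", 0.0),   # second/third speed information
    ]

class SafeDistanceModel:
    """Stand-in for the trained safe distance prediction model; here a fixed
    linear function. In the patent, the mapping would be learned from samples
    of (target object data set, vehicle data set) -> safe distance labels."""
    COEF = [-1.0, 0.8, 0.5, 0.6]   # illustrative learned coefficients
    BIAS = 5.0                     # illustrative bias term (metres)

    def predict(self, features: list) -> float:
        return self.BIAS + sum(c * x for c, x in zip(self.COEF, features))

vehicle = {"is_turning": 0, "speed_mps": 12.0}
pedestrian = {"height_m": 1.7, "speed_mps": 1.4}
safe_distance = SafeDistanceModel().predict(build_features(vehicle, pedestrian))
```

A real deployment would replace `SafeDistanceModel` with whatever regressor was trained offline; only the feature/label split matters for the description above.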
Optionally, the pedestrian data set further includes: a first weight and a second weight; the first weight is used for restricting the influence degree of the pedestrian form information on a corresponding safety distance result; the second weight is used for restricting the influence degree of the first speed information on the corresponding safety distance result.
Optionally, the vehicle data set further includes: a third weight, a fourth weight, and a fifth weight; the third weight is used for restricting the influence degree of the vehicle running state information on the corresponding safe distance result; the fourth weight is used for restraining the influence degree of the vehicle track information on a corresponding safe distance result; the fifth weight is used for restricting the influence degree of the third speed information on the corresponding safety distance result;
In this manner, the safe distance can be adjusted more flexibly, making the method more targeted when dealing with complex road conditions.
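One simple way to realize the constraint weights described above is to scale each data item's contribution by its weight before combining. The combination rule and the weight values below are assumptions for illustration:

```python
def weighted_contribution(features: dict, weights: dict) -> float:
    """Sum each feature's raw contribution scaled by its constraint weight;
    a weight below 1.0 dampens that item's influence on the safe distance
    result, a weight above 1.0 amplifies it."""
    return sum(weights.get(name, 1.0) * value for name, value in features.items())

pedestrian_features = {"form_info": 2.0, "first_speed": 3.0}
# First weight constrains the pedestrian form information; the second
# constrains the speed information (values are illustrative).
weights = {"form_info": 0.5, "first_speed": 1.5}
score = weighted_contribution(pedestrian_features, weights)
```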
In yet another possible implementation manner of the first aspect, after determining the state information of the target object according to the off-board state information, the method further includes:
if the target object includes a pedestrian and another vehicle, when the driving state of the vehicle is straight and the distance between the vehicle and either the pedestrian or the other vehicle is less than or equal to the larger of the first distance and the third distance, displaying the target image of the target area corresponding to the pedestrian on the screen, and outputting first prompt information, where the first prompt information is used for prompting the driver to observe the screen corresponding to the target area;

if the target object includes a pedestrian and another vehicle, when the driving state of the vehicle is turning and the distance between the vehicle and either the pedestrian or the other vehicle is less than or equal to the larger of the second distance and the fourth distance, displaying the target image of the target area corresponding to the pedestrian on the screen, and outputting first prompt information.
Specifically, on narrow roads or roads where pedestrians and vehicles are not separated, a situation in which the target object includes both a pedestrian and another vehicle is likely to occur. In such a complex situation, frequent prompts may disturb the driver. Therefore, when the driving state of the vehicle is straight and the distance between the vehicle and either the pedestrian or the other vehicle is less than or equal to the larger of the first distance and the third distance, a picture of the area blocked by the A-pillar is displayed on the screen on the side of the A-pillar close to the driver's seat, and first prompt information is output; and when the driving state of the vehicle is turning and that distance is less than or equal to the larger of the second distance and the fourth distance, a picture of the area blocked by the A-pillar is likewise displayed on the screen on the side of the A-pillar close to the driver's seat, and first prompt information is output. Optionally, if any of the above situations occurs again while the first prompt information is being output and prompt information would need to be output again, the prompt may be suppressed and only the on-screen display of the blocked area extended, so that the driver is not disturbed by frequent prompts and accidents are avoided.
In yet another possible embodiment of the first aspect, the values of the third distance and the fourth distance are determined according to vehicle types of the other vehicles, the vehicle types of the other vehicles at least including bicycles, electric bicycles, cars, trucks, buses, trailers, non-complete vehicles, motorcycles, tractors or special vehicles, the special vehicles including ambulances, police cars or fire trucks.
Because many types of vehicles travel on today's roads, and the common vehicle types differ between cities and between roads (some city roads carry many electric bicycles, others many cars, and some roads many trucks), the third distance and the fourth distance associated with other vehicles in the target object can be varied according to the vehicle type. It is worth noting that the safe distance between the vehicle and other vehicles whose type is a special vehicle is the largest; optionally, when another vehicle is identified as a special vehicle, special prompt information is output to the driver, so that the vehicle not only complies with traffic regulations but also embodies humanistic care.
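A per-type lookup is one straightforward way to vary the third and fourth distances as described. The metre values below are invented for illustration; only the ordering (special vehicles largest) follows the text:

```python
# (third_distance_m, fourth_distance_m) per vehicle type; values illustrative.
TYPE_DISTANCES = {
    "bicycle":          (5.0, 3.0),
    "electric_bicycle": (6.0, 4.0),
    "car":              (8.0, 5.0),
    "truck":            (12.0, 8.0),
    "special":          (20.0, 15.0),  # ambulance / police car / fire truck
}

def distances_for(vehicle_type: str) -> tuple:
    """Return (third, fourth) safe distances, falling back to 'car' for
    types without a dedicated entry."""
    return TYPE_DISTANCES.get(vehicle_type, TYPE_DISTANCES["car"])
```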
In yet another possible implementation manner of the first aspect, the screen includes a right A-pillar screen provided on the A-pillar on the passenger side and facing the driver's seat; after the picture of the area blocked by the A-pillar is displayed on the A-pillar screen on the side close to the driver's seat and the first prompt information is output when the distance between the target object and the vehicle is less than or equal to a preset distance, the method further includes:
and if it is detected that the driver's line of sight toward the rearview mirror on the passenger side is blocked, controlling the right A-pillar screen to display picture information of the side and rear of the passenger side, and outputting second prompt information, where the second prompt information is used for instructing the driver to watch the road condition pictures of the side and rear of the passenger side of the vehicle.
In yet another possible implementation manner of the first aspect, after displaying a picture of the area blocked by the A-pillar on the screen and outputting the first prompt information when the distance between the target object and the vehicle is less than or equal to a preset distance, the method further includes:
and outputting third prompt information, wherein the third prompt information is used for prompting a distance value between the vehicle and the target object.
Specifically, the third prompt information may be displayed through the screen or output through voice broadcast, so that the driver can perform appropriate operations according to the actual distance.
In a further possible implementation manner of the first aspect, the vehicle includes a drive recorder located at the front of the vehicle, and after displaying the picture of the area blocked by the A-pillar on the screen of the A-pillar and outputting the first prompt information when the distance between the target object and the vehicle is less than or equal to a preset distance, the method further includes:
and adjusting the shooting direction of the automobile data recorder so that the automobile data recorder shoots the picture related to the target object.
Generally, a drive recorder can only capture the picture directly in front of the vehicle, while in an actual scene an accident may occur in the A-pillar blind area that the driver cannot observe, and the drive recorder may fail to record the actual situation because of its angle. Therefore, to cope with the situation in which no video is recorded when a traffic accident occurs, once the distance between the target object and the vehicle is less than or equal to the preset distance, the shooting direction of the drive recorder is adjusted so that it captures the picture related to the target object. Optionally, the shooting direction is adjusted by 10 degrees, so that the drive recorder can capture both the picture related to the target object and most of the picture in front of the vehicle.
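The bounded-angle adjustment described above can be sketched as a simple clamp; the function name and the default 10-degree step are taken from the optional figure in the text, while the bearing convention is an assumption:

```python
def adjust_recorder(current_deg: float, target_bearing_deg: float,
                    step_deg: float = 10.0) -> float:
    """Rotate the drive recorder toward the target object's bearing by at
    most step_deg, so that both the target object and most of the forward
    view remain in frame. 0 degrees = straight ahead, positive = rightward."""
    delta = target_bearing_deg - current_deg
    delta = max(-step_deg, min(step_deg, delta))  # clamp the rotation step
    return current_deg + delta
```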
In an optional embodiment, the shooting direction of the drive recorder is adjusted according to a collision prediction result between the target object and the vehicle, so that the drive recorder captures the picture related to the target object;
specifically, a vehicle data set is obtained, wherein the vehicle data set comprises historical driving states and first speed information of the vehicle, and the driving states at least comprise straight driving or turning;
generating the target object data set according to the vehicle exterior state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, the pedestrian form information comprising height and orientation, the vehicle data set comprising: vehicle driving state information, vehicle trajectory information, and third speed information;
inputting the target object data set and the vehicle data set into a collision prediction model to obtain a collision prediction result for the target object and the vehicle, where the collision prediction result includes a complete collision, a possible collision, or no collision; a complete collision means it is determined that the vehicle and the target object will collide; a possible collision means it is determined that the vehicle and the target object will collide if the driver does not take appropriate action; and no collision means it is determined that the vehicle and the target object will not collide even if the driver performs no operation changing the driving state or speed of the vehicle. The collision prediction model is a model trained on a plurality of target object data set samples, corresponding vehicle data set samples, and corresponding collision prediction results, where the target object data set and the vehicle data set belong to the feature data and the collision prediction result belongs to the label data.
And if the collision prediction result is complete collision or possible collision, adjusting the shooting direction of the automobile data recorder so that the automobile data recorder shoots the picture related to the target object.
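The gating rule above reduces to re-aiming the recorder only for the two dangerous outcomes. The result-string labels and the recorder state dictionary below are hypothetical:

```python
def on_collision_prediction(result: str, recorder: dict) -> bool:
    """Re-aim the drive recorder only when the model predicts a complete
    or possible collision; for 'no collision' the recorder is left as-is."""
    if result in ("complete_collision", "possible_collision"):
        recorder["aimed_at_target"] = True
        return True
    return False

recorder = {"aimed_at_target": False}
acted = on_collision_prediction("possible_collision", recorder)
```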
By the method, the automobile data recorder can make adaptive adjustment according to the method provided by one or more embodiments, and the experience of a driver is improved.
In a second aspect, an embodiment of the present application provides an image display device based on a driver's view state in a driving scene, where the device at least includes an acquisition unit, a prediction unit, and a display unit. The image display device based on the driver's view state in the driving scene is used for implementing the method described in any one of the embodiments of the first aspect, wherein the acquisition unit, the prediction unit and the display unit are introduced as follows:
an acquisition unit, configured to acquire comprehensive state information of the vehicle, the comprehensive state information including at least one of the following: a steering wheel angle, a head posture of the driver, and a posture of the passenger in the front passenger seat;
the prediction unit is used for predicting a target area which needs to be observed by human eyes of the driver and is in a blocked state according to the comprehensive state information, and the target area comprises at least one of the following: the system comprises an area which is shielded by a left A column of the vehicle and the farthest distance between an area boundary and the vehicle is smaller than a preset distance, an area which is shielded by a right A column of the vehicle and the farthest distance between the area boundary and the vehicle is smaller than the preset distance, and a scene area corresponding to an image displayed by a right rearview mirror of the vehicle;
and the display unit is used for controlling the left screen of the left A column and/or the right screen of the right A column of the vehicle to display images according to the target area, the left screen is arranged on the left A column, and the right screen is arranged on the right A column.
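The three units above can be sketched as one class with one method per unit. Everything below (thresholds, field names, the pose encodings) is an illustrative assumption; only the acquire/predict/display division follows the text:

```python
class ImageDisplayDevice:
    """Sketch of the acquisition, prediction, and display units of the
    image display device; method and field names are hypothetical."""

    def acquire(self, vehicle: dict) -> dict:
        """Acquisition unit: collect the comprehensive state information."""
        return {"steering_deg": vehicle["steering_deg"],
                "driver_head": vehicle["driver_head"],
                "passenger_pose": vehicle.get("passenger_pose")}

    def predict_target_area(self, state: dict) -> str:
        """Prediction unit: infer the blocked area the driver needs to see.
        A +/-5 degree steering dead-band is an assumed threshold."""
        if state["steering_deg"] < -5:
            return "left_a_pillar_area"
        if state["steering_deg"] > 5:
            return "right_a_pillar_area"
        if (state["driver_head"] == "right_mirror"
                and state["passenger_pose"] == "blocking"):
            return "right_mirror_view"
        return "none"

    def display(self, area: str) -> dict:
        """Display unit: drive the left/right A-pillar screens; both stay
        normally off when no target area is predicted."""
        return {"left_screen_on": area == "left_a_pillar_area",
                "right_screen_on": area in ("right_a_pillar_area",
                                            "right_mirror_view")}
```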
In current vehicle models, it is common to have an A-pillar connecting the front engine compartment and the roof, a B-pillar located between the front door and the rear door, and a C-pillar connecting the trunk and the roof. The A-pillar, B-pillar, and C-pillar all block part of the driver's view while driving. However, because ordinary vehicles are equipped with rearview mirrors and a reversing image, the B-pillar and C-pillar generally pose no potential safety hazard during driving, whereas the A-pillar, being located at the front of the vehicle, may create a certain potential safety hazard while the driver is driving the vehicle.
The vehicle is generally the main body to which the method provided in the embodiments of the present application is applied. Specifically, the method is applied to an automatic driving domain controller of a domain controller system of the vehicle, and the automatic driving domain controller is the execution subject of the method. The domain controller system includes the automatic driving domain controller and a vehicle body domain controller; the vehicle body domain controller is generally used to control devices of the vehicle, including devices outside the vehicle. The two controllers divide the work and cooperate so that the driver of the vehicle can observe the blocked area in time.
In the method, it should be emphasized first that whether an image is displayed on the screen of the left A-pillar, the right A-pillar, or the A-pillars on both sides is determined according to the driver's observation line of sight; therefore, if the driver's line of sight is not blocked while driving, the screens remain in a normally-off state.
As applied in the first aspect, the method first obtains the comprehensive state information of the vehicle; the comprehensive state information may be collected by the vehicle body domain controller, but the main body processing the obtained information is the automatic driving domain controller.

Secondly, the target area that the driver's eyes need to observe but that is in a blocked state can be predicted from the comprehensive state information, and the target area is divided into three cases. In case one, the left A-pillar of the vehicle blocks the driver's view, and the target area is the area that is blocked by the left A-pillar and whose boundary's farthest distance from the vehicle is less than the preset distance. In case two, the right A-pillar of the vehicle blocks the driver's view, and the target area is the area that is blocked by the right A-pillar and whose boundary's farthest distance from the vehicle is less than the preset distance. In case three, the passenger in the front passenger seat blocks the driver's view of the right rearview mirror, and the target area is the viewing area corresponding to the image displayed by the right rearview mirror of the vehicle.

Based on these three cases, the key of the method is how to determine that one of the three cases has occurred and what image to correspondingly display on the screen, so that the driver can drive safely without the trouble of a blocked view.
Specifically, the automatic driving domain controller performs prediction and judgment based on the comprehensive state information acquired from the vehicle body domain controller, which means that the acquired comprehensive state information can be used to judge the occurrence of the three cases. It follows that the automatic driving domain controller determines what is blocking the driver's view by analyzing the comprehensive state information.

Once it is determined what is blocking the driver's line of sight, a target area can be determined, and the target area is closely related to the image displayed on the screen. For example, when the target area is determined to be the area that is blocked by the left A-pillar of the vehicle and whose boundary's farthest distance from the vehicle is less than the preset distance, an image of that area is acquired by the vehicle body domain controller, and the image is processed and displayed on the screen of the left A-pillar. In the method, the subject that processes the image is the automatic driving domain controller.

Therefore, through the cooperation of the automatic driving domain controller and the vehicle body domain controller, the automatic driving domain controller becomes more intelligent: it determines, at the appropriate time, the blocked area that the driver needs to observe and controls the screen to display the picture of the corresponding area, while keeping the screen normally off at other times, so that the driver's line of sight is not blocked and energy saving and environmental protection are ensured.
In a further possible implementation manner of the second aspect, the prediction unit is specifically configured to:
according to the head posture of the driver, determining that the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to a right side rearview mirror of the vehicle;
determining whether the sight of the driver observing the right side rearview mirror of the vehicle is shielded by the passenger in the copilot position according to the head posture of the driver and the posture of the passenger in the copilot position;
and if the sight of the driver observing the right side rearview mirror of the vehicle is blocked by the passenger in the front passenger seat, determining that the target area which the eyes of the driver need to observe and is in the blocked state is the viewing area corresponding to the image displayed by the right side rearview mirror of the vehicle.
Specifically, the screen can display other pictures besides the picture blocked by the A-pillar. In this embodiment, the other pictures include the image displayed by the rearview mirror on the passenger side: the screen of the right A-pillar is installed on the A-pillar on the passenger side and faces the driver's seat, and when it is detected that the driver's line of sight toward the rearview mirror on the passenger side is blocked, the screen of the right A-pillar is controlled to display the picture information of the side and rear of the passenger side, replacing the passenger-side rearview mirror so that the driver can observe the picture the mirror should display. Second prompt information is also output, so that even if the driver overlooks the picture displayed on the screen of the right A-pillar, the driver can react in time to the need to observe the side or rear conditions.
In the third of the three cases, that is, when the passenger in the front passenger seat blocks the driver's view of the right rearview mirror and the target area is the viewing area corresponding to the image displayed on the right rearview mirror of the vehicle, the acquired comprehensive state information includes at least the head posture of the driver and the posture of the passenger in the front passenger seat.

It is then determined, according to the head posture of the driver and the posture of the passenger in the front passenger seat, whether the driver's line of sight toward the right rearview mirror of the vehicle is blocked by the passenger. If it is blocked, the target area, which the driver needs to observe but which is blocked, is the viewing area corresponding to the image displayed by the right rearview mirror of the vehicle. The automatic driving domain controller may thus be configured to analyze different information depending on the situation.
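One simple geometric stand-in for the pose-based occlusion judgment is to test whether the eye-to-mirror sight line passes near the passenger's head, modeled as a circle in a top-down 2D view. The geometry, coordinates, and radius below are illustrative assumptions, not the patent's disclosed pose analysis:

```python
def sight_blocked(eye: tuple, mirror: tuple,
                  head_center: tuple, head_radius: float) -> bool:
    """Return True if the segment from the driver's eye to the right
    rearview mirror comes within head_radius of the passenger's head
    (all points in a shared top-down 2D vehicle frame, metres)."""
    ex, ey = eye
    mx, my = mirror
    cx, cy = head_center
    dx, dy = mx - ex, my - ey
    seg_len2 = dx * dx + dy * dy
    # Parameter t of the closest point on the segment to the head center.
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
        ((cx - ex) * dx + (cy - ey) * dy) / seg_len2))
    px, py = ex + t * dx, ey + t * dy
    return (px - cx) ** 2 + (py - cy) ** 2 <= head_radius ** 2
```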
In a further possible implementation manner of the second aspect, the display unit is specifically configured to:
acquiring image data of at least one vehicle exterior camera in a scene area comprising a scene area corresponding to an image displayed by the right side rearview mirror in a plurality of vehicle exterior cameras of the vehicle;
generating a target image matched with the right screen of the right A column according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror;
and displaying the target image on a right screen of the right A column.
In the above process, generating the target image adapted to the screen of the right A-pillar requires the automatic driving domain controller to adjust parameters such as the frame rate and viewing range of the image data collected by the vehicle exterior camera, so as to generate a target image adapted to the right screen of the right A-pillar.
Optionally, in the process of generating the target image adapted to the right screen of the right A-pillar, a standard conforming to the driver's observation habits is added, so as to generate a picture matching those habits; the standard may be obtained from a model fitted to the driver's current habits, or may be set manually during generation.
Compared with the existing single solution of displaying a fixed, hard-coded viewing range, this helps improve the flexibility, accuracy, and comprehensiveness of the automatic driving domain controller's image display.
In yet another possible implementation manner of the second aspect, in generating the target image adapted to the right screen of the right a-pillar according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right a-pillar, and the imaging characteristics of the right rearview mirror, the display unit is further configured to:
if the number of the at least one vehicle exterior camera is one, generating the target image adapted to the right screen of the right A-pillar according to the image data, collected by that vehicle exterior camera, corresponding to the image displayed by the right rearview mirror, the size of the right screen of the right A-pillar, and the imaging characteristics of the right rearview mirror;

if the number of the at least one vehicle exterior camera is more than one, fusing the plural image data collected by the vehicle exterior cameras to generate target image data corresponding to the image displayed by the right rearview mirror, and then generating the target image adapted to the right screen of the right A-pillar according to the target image data, the size of the right screen of the right A-pillar, and the imaging characteristics of the right rearview mirror.
Considering the differences among the devices controlled by the vehicle body domain controllers of different vehicle models, the target image can still be adapted to the screen and to the driver's observation habits; therefore, the target image is generated in different ways according to the number of vehicle exterior cameras, further improving the comprehensiveness of the automatic driving domain controller's image display.
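The single-versus-multiple-camera dispatch described above can be sketched as a branch around a fusion step. The data structures below are placeholders (real code would operate on image buffers, and fusion would be an actual stitching/blending step):

```python
def build_target_image(frames: list, screen_size: tuple,
                       mirror_fov_deg: float) -> dict:
    """Dispatch on camera count: a single frame is adapted directly to the
    right A-pillar screen; multiple frames are fused first, then adapted
    using the screen size and the mirror's imaging characteristics."""
    if len(frames) == 1:
        source = frames[0]                       # use the lone camera's frame
    else:
        source = {"fused_from": len(frames)}     # stand-in for image fusion
    return {"source": source,
            "size": screen_size,                 # match the screen's pixel size
            "fov_deg": mirror_fov_deg}           # match the mirror's field of view

single = build_target_image([{"camera": "right"}], (200, 800), 25.0)
multi = build_target_image([{"camera": "right"}, {"camera": "rear"}], (200, 800), 25.0)
```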
In a further possible implementation manner of the second aspect, the prediction unit is specifically configured to:
determining the pre-observation direction of the driver according to the comprehensive state information;
if the pre-observation direction of the driver is the left side of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by a vehicle left side A column, and the target area is an area shielded by the vehicle left side A column and the farthest distance between an area boundary and the vehicle is less than a preset distance;
if the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to the right side A column of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by the right side A column of the vehicle, and the target area is an area shielded by the right side A column of the vehicle, and the farthest distance between an area boundary and the vehicle is smaller than a preset distance.
The above process is key to embodying the intelligence of the automatic driving domain controller. For the driver's comfort, the screens on the left A-pillar and the right A-pillar are normally off, and corresponding images are displayed only after the target area is determined. To determine the target area, however, the driver's pre-observation direction must first be determined, and the pre-observation direction is determined according to the steering wheel angle and/or the driver's head posture. For example, if the steering wheel angle changes, the trajectory of the vehicle changes and the driver naturally looks in that direction, which means the steering wheel angle can represent the driver's observation direction.
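The inference of the pre-observation direction from the steering wheel angle and/or head posture can be sketched as a pair of thresholds. The threshold values and sign convention (negative = left) are assumptions for illustration:

```python
def pre_observation_direction(steering_deg: float, head_yaw_deg: float,
                              steer_thresh: float = 15.0,
                              head_thresh: float = 20.0) -> str:
    """Infer where the driver is about to look: a sufficiently large
    steering angle or head turn in either direction marks that side as
    the pre-observation direction; otherwise the driver looks ahead."""
    if steering_deg <= -steer_thresh or head_yaw_deg <= -head_thresh:
        return "left"
    if steering_deg >= steer_thresh or head_yaw_deg >= head_thresh:
        return "right"
    return "ahead"
```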
In a further possible implementation manner of the second aspect, the display unit is specifically configured to:
determining the running state of the vehicle according to the comprehensive state information of the vehicle, wherein the running state at least comprises straight running or turning;
determining state information of a target object according to the vehicle external state information, wherein the vehicle external state information is image data outside the vehicle, which is acquired through at least one vehicle external camera, the target object comprises pedestrians and/or other vehicles, and the state information of the target object comprises the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is straight and the distance between the pedestrian and the vehicle is smaller than or equal to a first distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the first prompt information is used for prompting the driver to observe a screen corresponding to the target area;
if the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is smaller than or equal to a second distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the second distance is smaller than the first distance;
if the target object is other vehicles, when the driving state of the vehicle is straight and the distance between the other vehicles and the vehicle is smaller than or equal to a third distance, displaying a picture of a target area corresponding to the other vehicles on a screen on the same side with the other vehicles, and outputting first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the another vehicle and the vehicle is smaller than or equal to a fourth distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side with the another vehicle, and outputting first prompt information, wherein the fourth distance is smaller than the third distance.
Further, since the time the driver has to observe, think, and operate differs between driving straight and turning, the safe distance set for the target object also differs. Therefore, the driving state of the vehicle is determined first; the driving state includes straight driving or turning, and in other embodiments it may include other situations, such as making a U-turn or braking.
Furthermore, if the target object close to the vehicle is a pedestrian, different safe distances are set according to the different driving states of the vehicle: when traveling straight the vehicle is generally faster, and when turning it is generally slower, leaving the driver relatively more time to operate the vehicle and avoid danger, so the second distance is set smaller than the first distance; similarly, the fourth distance is smaller than the third distance. In summary, different safe distances are set to deal with the complex situations encountered while the vehicle is driving, which is more reasonable and humane.
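The four-way selection in the listing above reduces to a two-axis lookup (target type, driving state). The function name and argument order are hypothetical; the ordering constraints (second < first, fourth < third) follow the text:

```python
def safe_threshold(target: str, state: str,
                   d1: float, d2: float, d3: float, d4: float) -> float:
    """Pick the safe distance for a single target object: pedestrians use
    the first/second distances, other vehicles the third/fourth; turning
    thresholds are smaller than straight-line ones (d2 < d1, d4 < d3)."""
    if target == "pedestrian":
        return d1 if state == "straight" else d2
    return d3 if state == "straight" else d4
```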
In yet another possible implementation manner of the second aspect, the display unit is further configured to:
obtaining a vehicle data set, wherein the vehicle data set comprises a historical driving state of the vehicle and first speed information, and the driving state at least comprises straight driving or turning;
generating the target object data set according to the vehicle exterior state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, the pedestrian form information comprising height and orientation, the vehicle data set comprising: vehicle driving state information, vehicle trajectory information, and third speed information;
inputting the target object data set and the vehicle data set into a safe distance prediction model to obtain a safe distance between the target object and the vehicle, wherein the safe distance includes the first distance, the second distance, the third distance or the fourth distance, the safe distance prediction model is a model obtained by training according to a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, the target object data set and the vehicle data set belong to feature data, and the safe distance belongs to tag data.
Specifically, the safe distance may be determined dynamically according to the actual conditions of the vehicle during driving. The safe distance in this embodiment includes the first distance, the second distance, the third distance, and the fourth distance, so that the safe distance for each target object is determined, before the driving state of the vehicle is judged, for use in subsequent operations.
Further, the safe distance is related to the actual driving/traveling states of both the vehicle and the target object. First, related data of the vehicle is acquired, namely a vehicle data set comprising the historical driving state of the vehicle and first speed information; the first speed information characterizes the driving state of the vehicle from a certain time point onward, and the driving state at least includes straight driving or turning. Secondly, the target object data set is generated according to the vehicle exterior state information. The target object data set includes a pedestrian data set and/or a vehicle data set; the pedestrian data set includes pedestrian form information and second speed information, where the pedestrian form information, i.e. height and orientation, characterizes the motion state of the pedestrian at the current time point or period; the vehicle data set includes vehicle driving state information, vehicle trajectory information and third speed information, which characterize the driving states of the other vehicles at the current time point or period.
Further, the target object data set and the vehicle data set are input into a safe distance prediction model. The safe distance prediction model is obtained by training on a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, where the target object data set and the vehicle data set serve as feature data and the safe distance serves as label data. The safe distance prediction model may be implemented with various model structures and application logics; for example, the related data of the vehicle and of the target object may each be integrated into a travel trajectory, the two trajectories fitted to obtain a collision judgment result, and an appropriate safe distance determined according to the collision judgment result and the related data of the target object.
Optionally, the pedestrian data set further includes: a first weight and a second weight; the first weight is used for restricting the influence degree of the pedestrian form information on a corresponding safety distance result; the second weight is used for restricting the influence degree of the first speed information on the corresponding safety distance result.
Optionally, the vehicle data set further includes: a third weight, a fourth weight, and a fifth weight. The third weight is used for constraining the degree of influence of the vehicle driving state information on the corresponding safe distance result; the fourth weight is used for constraining the degree of influence of the vehicle trajectory information on the corresponding safe distance result; and the fifth weight is used for constraining the degree of influence of the third speed information on the corresponding safe distance result.
In this way, the safe distance can be adjusted more flexibly and in a more targeted manner when dealing with complex road conditions.
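For illustration only (this is not part of the claimed method), the weighted pedestrian data set described above might be encoded as follows; the class name, field names, units and weight values are all hypothetical, and the real model's feature encoding is not specified by this application:

```python
from dataclasses import dataclass

@dataclass
class PedestrianData:
    height_m: float          # pedestrian form information: height
    orientation_deg: float   # pedestrian form information: orientation
    speed_mps: float         # second speed information
    w_form: float = 1.0      # first weight: influence of the form information
    w_speed: float = 1.0     # second weight: influence of the speed information

def pedestrian_features(p: PedestrianData) -> list[float]:
    """Scale each raw feature group by its weight before it enters
    the safe distance prediction model, so a larger weight increases
    that group's influence on the predicted safe distance."""
    return [
        p.w_form * p.height_m,
        p.w_form * p.orientation_deg,
        p.w_speed * p.speed_mps,
    ]
```

A vehicle data set would be weighted analogously with the third, fourth and fifth weights applied to its three information groups.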
In a third aspect, embodiments of the present application provide a vehicle comprising an automatic driving domain controller, a memory, and a computer program stored on the memory and operable on the automatic driving domain controller; when executing the computer program, the automatic driving domain controller may be adapted to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
It should be noted that the automatic driving domain controller included in the vehicle described in the third aspect may be a domain controller dedicated to executing these methods (referred to as a dedicated automatic driving domain controller for convenience), or may be a domain controller that executes these methods by calling a computer program, such as a general-purpose automatic driving domain controller. Optionally, the at least one automatic driving domain controller may also include both dedicated and general-purpose automatic driving domain controllers.
Alternatively, the computer program may be stored in a memory. For example, the memory may be a non-transient memory, such as a read-only memory (ROM). The memory may be integrated with the automatic driving domain controller on the same device, or may be separately disposed on different devices; the embodiment of the present application does not limit the type of the memory or the arrangement of the memory relative to the automatic driving domain controller.
In one possible embodiment, the at least one memory is located outside the vehicle.
In yet another possible embodiment, the at least one memory is located within the vehicle.
In yet another possible embodiment, a portion of the at least one memory is located inside the vehicle and another portion of the memory is located outside the vehicle.
In the present application, the automatic driving domain controller and the memory may also be integrated in one device.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium in which a computer program is stored, which, when executed on at least one automatic driving domain controller, implements the method described in the first aspect or any one of the alternatives of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program for implementing the method described in the first aspect or any alternative thereof, when the program is run on at least one automatic driving domain controller.
Alternatively, the computer program product may be a software installation package, which may be downloaded and executed on a computing device in case it is desired to use the method described above.
The advantageous effects of the technical solutions provided in the third to fifth aspects of the present application may refer to the advantageous effects of the technical solutions of the first and second aspects, and are not described herein again.
Drawings
The drawings that are required to be used in the description of the embodiments will now be briefly described.
FIG. 1 is a schematic view of a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a domain controller system of a vehicle according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image displaying method based on a driver's view state in a driving scene according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image display device based on a driver's view state in a driving scene according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The system architecture applied in the embodiments of the present application is described below. It should be noted that the system architecture and the service scenario described in the present application are for more clearly illustrating the technical solution of the present application, and do not constitute a limitation to the technical solution provided in the present application, and as a person having ordinary skill in the art knows, along with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the present application is also applicable to similar technical problems.
Referring to fig. 1, fig. 1 is a schematic view of a vehicle according to an embodiment of the present application, where:
the vehicle 10 may be a vehicle-mounted device, or may be a vehicle itself such as an automobile, a truck, a passenger car, a trailer, or an incomplete vehicle. Optionally, the vehicle 10 may be provided with image acquisition devices such as cameras or industrial cameras, mounted on the A-pillars on both sides of the vehicle 10 and facing the outside of the vehicle; through these devices the vehicle 10 acquires image information of the areas blocked by the A-pillars. Accordingly, the vehicle 10 at least includes a left screen 101 on the left A-pillar and a right screen 102 on the right A-pillar, through which the vehicle 10 displays pictures of the areas blocked by the A-pillars; the left screen 101 is located on the A-pillar on the driver's side of the vehicle, and the right screen 102 is located on the A-pillar on the front passenger side.
Optionally, the right screen 102 of the right A-pillar may display pictures of other areas; for example, an image acquisition device may also be mounted on the rearview mirror, so that the right screen 102 displays pictures of the side and rear of the vehicle 10, i.e. the picture that would be shown by the right rearview mirror. The vehicle 10 is further provided with information acquisition devices such as a vehicle-mounted radar, an infrared sensor, a speed sensor and a distance sensor, through which the vehicle 10 can acquire related information of a target object and/or of the vehicle 10 itself, the target object including a pedestrian or another vehicle. Optionally, the vehicle 10 is provided with an audio device, which may be installed inside or outside the left screen 101 and the right screen 102, within the vehicle 10, and is used for playing prompt messages in some scenes.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a domain controller system of a vehicle provided in an embodiment of the present application. The domain controller system of the vehicle includes an automatic driving domain controller 201 and a vehicle body domain controller 202, and the execution subject of the image display method provided in the embodiment of the present application is the automatic driving domain controller 201, which has information processing capability. The automatic driving domain controller 201 establishes a communication connection with the body domain controller 202, and the body domain controller 202 is configured to control devices in the vehicle, such as an in-vehicle camera, an exterior camera, or a screen.
The automatic driving domain controller 201 includes: a first interaction layer 2011, a processing layer 2012, and a second interaction layer 2013. The first interaction layer 2011 is mainly used for information interaction with the body domain controller 202, for example sending information requests and receiving the requested information, including the comprehensive state information of the vehicle and the image data acquired by the cameras. The processing layer 2012 is mainly configured to process information data such as the comprehensive state information or the image data; for example, the processing layer 2012 predicts the target area by processing the comprehensive state information, and obtains the target image by processing the image data and other data. The second interaction layer 2013 is mainly configured to send a screen control request to the body domain controller 202 so as to control the screen to display the target image.
The above-mentioned automatic driving domain controller 201 and body domain controller 202 may both be disposed inside the vehicle, may both be disposed outside the vehicle, or may be disposed one inside and one outside the vehicle, which is not limited in the embodiments of the present application.
Referring to fig. 3, fig. 3 is a schematic flow diagram of an image display method based on a driver's view state in a driving scene according to an embodiment of the present application. The method is applied to the automatic driving domain controller of a domain controller system of a vehicle, where the domain controller system includes the automatic driving domain controller and a vehicle body domain controller that are in communication connection. The method may be implemented based on the system architecture shown in fig. 2, or based on other architectures, and includes, but is not limited to, the following steps:
step S301: and acquiring comprehensive state information of the vehicle.
The comprehensive state information is obtained through the vehicle body domain controller; for example, the automatic driving domain controller sends an acquisition request for the comprehensive state information to the vehicle body domain controller, and then receives the comprehensive state information sent by the vehicle body domain controller. The comprehensive state information is used for predicting a target area which needs to be observed by the eyes of the driver of the vehicle and is in an occluded state.
The comprehensive state information includes at least one of: a steering wheel angle, a head posture of the driver, and a posture of a passenger in the front passenger seat.
In an alternative embodiment, while the vehicle is normally travelling on the road, the comprehensive state information includes only the steering wheel angle; whether the vehicle is turning or making a U-turn is determined from the steering wheel angle, so as to predict the area that the driver's eyes need to observe.
In another alternative embodiment, the comprehensive state information includes only the head posture of the driver; the direction or area that the driver's eyes need to observe is determined from the head posture, and it is determined whether that area is occluded.
In another optional embodiment, the comprehensive state information includes the head posture of the driver and the posture of the passenger in the front passenger seat, and whether the driver's line of sight toward the right rearview mirror is blocked by that passenger is determined from these two postures.
In another optional embodiment, since it cannot be predicted in advance which of the above situations will occur, the acquired comprehensive state information includes the steering wheel angle, the head posture of the driver, and the posture of the passenger in the front passenger seat. First, whether the driving route of the vehicle has shifted is determined according to the steering wheel angle; if so, whether the area needing to be observed is occluded is determined according to the direction of the shift. If not, whether the driver is preparing to observe the two sides of the vehicle is determined according to the head posture of the driver: if the head posture deviates, it is determined that the driver is preparing to observe a side of the vehicle, and the observation direction is determined; and if the head posture deviates to the right, whether the target area which needs to be observed by the driver's eyes and is in an occluded state is the right rearview mirror of the vehicle is determined according to the posture of the passenger in the front passenger seat.
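Purely as a non-limiting illustration of the decision flow just described (not the claimed implementation), the prediction could be sketched as follows; the function name, angle thresholds and sign conventions are all assumptions:

```python
from typing import Optional

def predict_target_area(steering_angle_deg: float,
                        head_yaw_deg: float,
                        passenger_blocks_mirror: bool,
                        angle_threshold: float = 15.0,
                        yaw_threshold: float = 20.0) -> Optional[str]:
    """Return the occluded area the driver needs to observe, or None.

    Positive angles mean "toward the right"; thresholds are illustrative."""
    # 1) Steering wheel first: has the driving route shifted?
    if abs(steering_angle_deg) >= angle_threshold:
        return "right_a_pillar" if steering_angle_deg > 0 else "left_a_pillar"
    # 2) Otherwise use the head posture: is the driver about to look aside?
    if head_yaw_deg <= -yaw_threshold:
        return "left_a_pillar"
    if head_yaw_deg >= yaw_threshold:
        # Head deviates to the right: either the right A-pillar region or,
        # if the front passenger blocks the sight line, the right mirror view.
        return "right_mirror" if passenger_blocks_mirror else "right_a_pillar"
    return None
```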
Step S302: and predicting a target area which needs to be observed by human eyes of the driver and is in a shielded state according to the comprehensive state information.
The target area includes at least one of: the image display device comprises an area which is shielded by a left A column of the vehicle and the farthest distance between an area boundary and the vehicle is smaller than a preset distance, an area which is shielded by a right A column of the vehicle and the farthest distance between the area boundary and the vehicle is smaller than the preset distance, and a viewing area corresponding to an image displayed by a right rearview mirror of the vehicle.
In this embodiment, the target area includes the following three cases:
the first condition is as follows: the target area is a viewing area corresponding to an image displayed by the right side rearview mirror of the vehicle.
In case one, the comprehensive state information includes at least the head posture of the driver and the posture of the passenger in the front passenger seat.
According to the head posture of the driver, determining that the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to a right side rearview mirror of the vehicle; the head pose of the driver can be obtained according to a camera inside the vehicle.
Determining, according to the head posture of the driver and the posture of the passenger in the front passenger seat, whether the driver's line of sight toward the right rearview mirror of the vehicle is blocked by that passenger; the posture of the passenger can be obtained from a camera inside the vehicle. Specifically, the line-of-sight direction of the driver is determined according to the head posture of the driver, and if it is the direction for observing the right rearview mirror, whether the posture of the passenger obstructs the driver's view of the right rearview mirror is determined.
And if the sight of the driver for observing the right side rearview mirror of the vehicle is shielded by the passenger in the front passenger seat, determining that the target area which is required to be observed by the eyes of the driver and is in the shielded state is a scene area corresponding to the image displayed by the right side rearview mirror of the vehicle.
In the first case, the picture corresponding to the blocked image of the right rearview mirror of the vehicle is displayed on the right screen.
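One simple way to realize the occlusion judgment above is a point-to-segment distance test in the cabin plane. This is a hypothetical sketch only, not the claimed implementation; the coordinate frame, point names and head radius are illustrative:

```python
import math

def sight_blocked(eye, mirror, passenger_head, head_radius=0.12):
    """True if the straight line of sight from the driver's eye point to the
    right mirror passes within `head_radius` metres of the passenger's head.
    All points are (x, y) tuples in a top-down cabin plane."""
    (x1, y1), (x2, y2), (px, py) = eye, mirror, passenger_head
    dx, dy = x2 - x1, y2 - y1
    # Projection parameter of the head onto the sight segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    cx, cy = x1 + t * dx, y1 + t * dy   # closest point on the sight segment
    return math.hypot(px - cx, py - cy) < head_radius
```

In practice the eye and head positions would come from the in-vehicle camera's pose estimation described above.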
And a second condition: the target area is an area which is shielded by the left A column of the vehicle and the farthest distance between the area boundary and the vehicle is smaller than a preset distance.
The comprehensive state information includes at least one of the steering wheel angle or the head posture of the driver, the head posture including the driver's head orientation;
determining the pre-observation direction of the driver according to the comprehensive state information, wherein the pre-observation direction is a direction to be observed by the driver, for example, if the head orientation of the driver is shifting to the right, the pre-observation direction of the driver is the right side;
if the pre-observation direction of the driver is the left side of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by the vehicle left side A column, and the target area is an area shielded by the vehicle left side A column and the farthest distance between an area boundary and the vehicle is smaller than a preset distance.
It should be noted that the preset distance is determined according to the image data, optionally, the preset distance is determined according to other vehicles or pedestrians in the image data, and optionally, the preset distance is determined according to the driving postures and/or speeds of other vehicles or pedestrians in the image data.
Case three: the target area is an area which is covered by the column A on the right side of the vehicle and the farthest distance between the area boundary and the vehicle is smaller than a preset distance.
If the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to the right side A column of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by the right side A column of the vehicle, and the target area is an area shielded by the right side A column of the vehicle, and the farthest distance between an area boundary and the vehicle is smaller than a preset distance.
In an optional implementation manner, if the pre-observation direction of the driver is the left side/right side of the vehicle, the target area is directly determined to be an area which is blocked by the left side/right side a pillar of the vehicle and the farthest distance between the area boundary and the vehicle is smaller than a preset distance, and the preset distance is preset.
Step S303: and controlling a left screen of a left A column and/or a right screen of a right A column of the vehicle to display images according to the target area.
The left screen is a screen disposed on the left A-pillar, and the right screen is a screen disposed on the right A-pillar.
Before the image is displayed, acquiring image data of the corresponding target area through the exterior cameras. Taking the above cases as an example, in an optional embodiment, acquiring image data of at least one of the plurality of exterior cameras of the vehicle, the viewing area of which includes the viewing area corresponding to the image displayed by the right rearview mirror;
generating a target image matched with the right screen of the right A-pillar according to the image data of the at least one exterior camera, the size of the right screen, and the imaging characteristics of the right rearview mirror. Optionally, the right screen is vertically elongated; combined with the length-and-width imaging characteristics of the right rearview mirror, an image whose proportions follow those of the right rearview mirror is displayed on the right screen, in which case black edges appear at the top and/or bottom of the right screen. Alternatively, in order to present a larger view of the area behind the right side of the vehicle, a target image adapted to the vertically elongated form of the right screen is generated directly from the image data, in which case the right screen has no black edges. Optionally, second prompt information is output, which instructs the driver of the vehicle to observe the road condition picture at the side and rear of the front passenger side of the vehicle.
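The letterboxed display just described (black edges at the top and/or bottom when the mirror's aspect ratio is preserved on a vertically elongated screen) can be sketched as a simple fitting computation; the function and its parameters are illustrative assumptions, not part of the claimed method:

```python
def fit_mirror_image(screen_w: int, screen_h: int,
                     mirror_w: float, mirror_h: float):
    """Scale the mirror picture to the widest size that fits the vertically
    elongated A-pillar screen while keeping the mirror's aspect ratio.
    Returns (image_w, image_h, top_black, bottom_black) in pixels."""
    aspect = mirror_h / mirror_w          # mirror height-to-width ratio
    img_w = screen_w                      # fill the full screen width
    img_h = min(screen_h, round(screen_w * aspect))
    leftover = screen_h - img_h           # split into top/bottom black edges
    return img_w, img_h, leftover // 2, leftover - leftover // 2
```

With a 200x800 screen and a 160x80 mirror, the picture fills the width and leaves equal black bands above and below; generating the image directly in the screen's own proportions instead makes `leftover` zero.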
If there is a single exterior camera, the target image matched with the right screen of the right A-pillar is generated according to the image data corresponding to the display of the right rearview mirror acquired by that camera, the size of the right screen, and the imaging characteristics of the right rearview mirror; optionally, the exterior camera is mounted on the right rearview mirror.
If there are multiple exterior cameras, target image data corresponding to the image displayed by the right rearview mirror is generated by fusing the multiple pieces of image data acquired by the exterior cameras, and the target image matched with the right screen of the right A-pillar is generated according to the target image data, the size of the right screen, and the imaging characteristics of the right rearview mirror. Optionally, the exterior cameras include a camera mounted on the outside of the A-pillar, whose imaging direction is collinear with the line of sight of a driver sitting in the driving seat looking toward the A-pillar, so that the acquired picture of the blocked area exactly fills the driver's blind zone.
In an optional embodiment, if there is a pedestrian or another vehicle in an area blocked by the left or right A-pillar of the vehicle, then even if the driver is not looking in that direction, the corresponding image is displayed on the screen on the same side as that pedestrian or vehicle, as follows:
determining a driving state of the vehicle according to the comprehensive state information of the vehicle, wherein the driving state at least comprises straight driving or turning; the integrated state information of the vehicle includes the in-vehicle state information and the out-vehicle state information, and the in-vehicle state information includes a driver's head posture and a steering wheel angle.
Determining state information of a target object according to the vehicle external state information, wherein the vehicle external state information is image data outside the vehicle, which is acquired through at least one vehicle external camera, the target object comprises pedestrians and/or other vehicles, and the state information of the target object comprises the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is straight and the distance between the pedestrian and the vehicle is smaller than or equal to a first distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side of the pedestrian, and outputting first prompt information, wherein the first prompt information is used for prompting the driver to observe a screen corresponding to the target area; in an optional implementation manner, a third prompt message is output, where the third prompt message is used to prompt a distance value between the vehicle and the target object, and in this embodiment, a real-time distance value between the target object and the vehicle is displayed on the screen, so that a driver of the vehicle can perform an appropriate operation according to an actual distance.
Further, the preset distance is a preset safe distance; the safe distance means that when the distance between the target object and the vehicle reaches this value, the driver of the vehicle needs to act in time.
If the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is smaller than or equal to a second distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the second distance is smaller than the first distance;
if the target object is another vehicle, when the driving state of the vehicle is straight and the distance between the another vehicle and the vehicle is smaller than or equal to a third distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side of the another vehicle, and outputting first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the another vehicle and the vehicle is smaller than or equal to a fourth distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side with the another vehicle, and outputting first prompt information, wherein the fourth distance is smaller than the third distance.
Inevitably, on some road sections with complex conditions, the target objects may include both a pedestrian and another vehicle. Therefore, in an optional embodiment, if the target objects include a pedestrian and another vehicle, when the driving state of the vehicle is straight driving and the distance between the vehicle and either of them is less than or equal to the larger of the first distance and the third distance, the picture of the area blocked by the A-pillar is displayed on the corresponding screen, and the first prompt information is output;
if the target object comprises a pedestrian and other vehicles, when the driving state of the vehicle is turning and the distance between any one object of the pedestrian and the other vehicles and the vehicle is smaller than or equal to the larger value of the second distance and the fourth distance, displaying a picture of an area shielded by the A column on a corresponding screen, and outputting first prompt information.
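The threshold selection above — comparing against the larger of the applicable safe distances when both target types are present — might be sketched as follows; the distance values and the function signature are hypothetical (the embodiment only requires that the second distance be smaller than the first, and the fourth smaller than the third):

```python
def should_alert(driving_state: str, distance_m: float,
                 has_pedestrian: bool, has_vehicle: bool,
                 d1=8.0, d2=5.0, d3=12.0, d4=7.0) -> bool:
    """Decide whether to display the occluded-area picture and output the
    first prompt information. d1..d4 stand for the first..fourth distances
    (illustrative values; d2 < d1 and d4 < d3 as the method requires)."""
    if driving_state == "straight":
        thresholds = [d1 if has_pedestrian else 0.0, d3 if has_vehicle else 0.0]
    else:  # turning
        thresholds = [d2 if has_pedestrian else 0.0, d4 if has_vehicle else 0.0]
    # With both target types present, compare against the larger threshold.
    return distance_m <= max(thresholds)
```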
In an optional implementation manner, a preset distance between the target object and the vehicle is generated in real time according to a state of the target object, and the preset distance is a safe distance, which is specifically as follows:
obtaining a vehicle data set, wherein the vehicle data set comprises a historical driving state of the vehicle and first speed information, and the driving state at least comprises straight driving or turning;
generating the target object data set according to the vehicle exterior state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, and the vehicle data set comprising vehicle driving state information, vehicle trajectory information and third speed information. The pedestrian form information, i.e. the height and orientation, is used for analyzing the travel trajectory of the pedestrian; the orientation of the pedestrian can be obtained from the orientation of the pedestrian's face and body features. The vehicle driving state information is the driving state information of the other vehicles among the target objects, and includes straight driving or turning, and optionally also making a U-turn or stopping. The vehicle trajectory information can be calculated from the vehicle driving state information and the third speed information, and is used to simulate the travel routes of the other vehicles so as to determine whether the predicted trajectories of another vehicle and the vehicle overlap; optionally, if they overlap, this indicates a possibility that the other vehicle will collide with the vehicle, and the corresponding safe distance is increased accordingly.
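The trajectory-overlap judgment just described could be sketched, purely for illustration, as a constant-velocity rollout; a production system would use the richer trajectory information of this embodiment, and the horizon, time step and margin here are assumptions:

```python
def trajectories_conflict(ego_pos, ego_vel, other_pos, other_vel,
                          horizon_s=5.0, step_s=0.1, margin_m=2.0) -> bool:
    """Roll both constant-velocity trajectories forward in time and report
    whether the two vehicles ever come within `margin_m` of each other.
    Positions and velocities are (x, y) tuples in metres and m/s."""
    t = 0.0
    while t <= horizon_s:
        ex, ey = ego_pos[0] + ego_vel[0] * t, ego_pos[1] + ego_vel[1] * t
        ox, oy = other_pos[0] + other_vel[0] * t, other_pos[1] + other_vel[1] * t
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < margin_m:
            return True   # predicted trajectories overlap -> raise safe distance
        t += step_s
    return False
```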
Optionally, the pedestrian data set further includes: a first weight and a second weight; the first weight is used for restricting the influence degree of the pedestrian form information on a corresponding safety distance result; the second weight is used for restricting the influence degree of the first speed information on the corresponding safety distance result.
Optionally, the vehicle data set further includes: a third weight, a fourth weight, and a fifth weight; the third weight is used for restricting the influence degree of the vehicle running state information on the corresponding safe distance result; the fourth weight is used for restraining the influence degree of the vehicle track information on a corresponding safe distance result; the fifth weight is used for restricting the influence degree of the third speed information on the corresponding safety distance result.
Inputting the target object data set and the vehicle data set into a safe distance prediction model to obtain a safe distance between the target object and the vehicle, wherein the safe distance includes the first distance, the second distance, the third distance or the fourth distance, the safe distance prediction model is a model obtained by training according to a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, the target object data set and the vehicle data set belong to feature data, and the safe distance belongs to tag data.
It should be noted that the safe distance prediction model may be pre-installed inside the vehicle, in which case the safe distance is obtained inside the vehicle. Alternatively, the safe distance prediction model may be installed in a server outside the vehicle: the vehicle establishes a communication connection with the server and sends it the target object data set and the vehicle data set, the server inputs both into the pre-trained safe distance prediction model to obtain the safe distance between the target object and the vehicle, and the vehicle receives the safe distance returned by the server.
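As a rough illustration of how such a trained model could behave, the sketch below memorises (feature vector, safe distance) training samples and predicts with the label of the nearest sample. The embodiment does not specify the model family, features, or training procedure, so this 1-nearest-neighbour stand-in is purely an assumption.

```python
# Minimal stand-in for the safe distance prediction model: a 1-nearest-
# neighbour regressor over (feature vector, safe distance) samples, where
# the feature vector concatenates the target object and vehicle data sets.
# Illustrative only; the real model in the patent is unspecified.

def train(samples):
    """samples: list of (feature_vector, safe_distance) pairs."""
    return list(samples)  # a 1-NN "model" simply memorises its samples

def predict(model, features):
    """Return the safe distance label of the closest training sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, safe_distance = min(model, key=lambda s: sq_dist(s[0], features))
    return safe_distance
```

In the server-side deployment described above, `train` would run offline on the server and only `predict` would be invoked per request, with the vehicle sending `features` over the communication connection.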
After the safe distance of the target object is determined, whether to display a corresponding picture on a screen and prompt the driver is decided according to the driving state of the vehicle.
In addition to being derived from the analysis of the relevant data sets, the third distance and the fourth distance may also be derived from other algorithms. In an alternative embodiment, the values of the third distance and the fourth distance are determined according to the model of the other vehicle, where the model includes at least a bicycle, an electric bicycle, a car, a truck, a passenger car, a trailer, an incomplete vehicle, a motorcycle, a tractor or a special vehicle, and the special vehicle includes an ambulance, a police vehicle or a fire truck. It should be noted that the safe distance between the vehicle and an other vehicle whose model is a special vehicle is the largest; optionally, when the other vehicle is identified as a special vehicle, special prompt information is output to the driver so that the driver can yield to the special vehicle in time.
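A minimal sketch of this model-based determination is a table keyed by vehicle type. The metre values below are invented for illustration; the only constraint taken from the text is the ordering requirement that special vehicles receive the largest safe distance and trigger a special prompt.

```python
# Illustrative safe distance by other-vehicle model. The numeric values
# are assumptions; the patent only requires that "special" is largest.
SAFE_DISTANCE_BY_TYPE = {
    "bicycle": 5.0, "electric_bicycle": 6.0, "motorcycle": 8.0,
    "car": 10.0, "passenger_car": 15.0, "truck": 15.0,
    "incomplete_vehicle": 15.0, "trailer": 18.0, "tractor": 18.0,
    "special": 25.0,  # ambulance, police vehicle, fire truck
}

def safe_distance_for(vehicle_type):
    """Return (safe distance, whether to output the special prompt)."""
    distance = SAFE_DISTANCE_BY_TYPE[vehicle_type]
    special_prompt = vehicle_type == "special"  # prompt driver to yield
    return distance, special_prompt
```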
Through the above process, safety accidents caused by the A-pillar obstructing the driver's view can be largely avoided, and other devices of the vehicle can also be adaptively adjusted according to the method. In an optional implementation, the shooting direction of the automobile data recorder is adjusted so that it captures the picture related to the target object, avoiding the situation where no video recording is available when a safety accident involving the target object occurs. Specifically, after the distance between the target object and the vehicle becomes smaller than or equal to the preset distance, the shooting angle of the automobile data recorder is rotated by 10 degrees toward the target object, so that the recorder can capture the picture related to the target object while still capturing most of the picture in front of the vehicle.
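The recorder adjustment could be sketched as follows; the sign convention (positive bearings to the right of straight ahead) and treating the 10-degree rotation as a single step toward the target are assumptions based on the description above.

```python
# Sketch of the automobile data recorder adjustment: once the target
# object comes within the preset distance, rotate the camera 10 degrees
# toward the object while keeping most of the forward view.
# Sign convention and step size are illustrative assumptions.

def adjust_dashcam(current_angle_deg, target_bearing_deg,
                   target_distance, preset_distance, step_deg=10.0):
    """Return the new camera angle; positive bearings point right."""
    if target_distance > preset_distance:
        return current_angle_deg  # object still far away: keep forward view
    direction = 1.0 if target_bearing_deg >= current_angle_deg else -1.0
    return current_angle_deg + direction * step_deg
```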
In conclusion, compared with the existing single scheme that solidifies and displays a fixed viewing range, the method provided by this embodiment improves the flexibility, accuracy and comprehensiveness of the automatic driving domain controller in image display, improves the driving safety of the vehicle, and improves the driving experience of the user.
In addition, the safe distance is set to give the vehicle driver enough reaction time: the corresponding picture blocked by the A-pillar is displayed on a screen to inform the driver of the actual condition of the blocked area, and prompt information is output to prevent the driver from ignoring the area blocked by the A-pillar due to driving habits or other factors. Together these form a complete set of logic for informing the vehicle driver that a potential safety hazard exists in the area blocked by the A-pillar, thereby avoiding safety accidents caused by the A-pillar blocking the driver's view.
The method of the embodiments of the present application is explained in detail above, and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image display device 40 based on a driver's view state in a driving scene, where the image display device 40 based on the driver's view state in the driving scene may be a device in the aforementioned vehicle, and the image display device 40 based on the driver's view state in the driving scene may include an acquisition unit 401, a prediction unit 402, and a display unit 403, where details of each unit are described below.
An obtaining unit 401, configured to obtain comprehensive status information of the vehicle, where the comprehensive status information includes at least one of: steering wheel angle, head attitude of driver, and attitude of passenger in copilot;
a predicting unit 402, configured to predict, according to the comprehensive state information, a target area that needs to be observed by human eyes of the driver and is in an occluded state, where the target area includes at least one of: an area which is shielded by a left A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than a preset distance, an area which is shielded by a right A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than the preset distance, and a viewing area corresponding to an image displayed by a right rearview mirror of the vehicle;
and a display unit 403, configured to control a left screen of a left a-pillar and/or a right screen of a right a-pillar of the vehicle to display an image according to the target area, where the left screen is a screen set on the left a-pillar, and the right screen is a screen set on the right a-pillar.
In a possible implementation, the prediction unit 402 is specifically configured to:
according to the head posture of the driver, determining that the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces a right side rearview mirror of the vehicle;
determining whether the sight of the driver observing the right side rearview mirror of the vehicle is shielded by the passenger in the copilot position according to the head posture of the driver and the posture of the passenger in the copilot position;
and if the sight of the driver observing the right side rearview mirror of the vehicle is blocked by the passenger in the front passenger seat, determining that the target area which the eyes of the driver need to observe and is in the blocked state is the viewing area corresponding to the image displayed by the right side rearview mirror of the vehicle.
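One way to picture this occlusion check is a 2-D line-of-sight test: the driver's sight line to the right rearview mirror is a segment, and the passenger's head (inferred from the passenger posture) is a circle. The coordinates, head radius and flat geometry below are purely illustrative assumptions, not the patent's actual method.

```python
# Geometric sketch of the occlusion check: the mirror view is considered
# blocked when the driver-to-mirror segment passes through a circle
# approximating the front passenger's head. Illustrative only.

def sightline_blocked(driver, mirror, passenger, head_radius=0.12):
    (dx, dy), (mx, my), (px, py) = driver, mirror, passenger
    sx, sy = mx - dx, my - dy                  # segment direction
    seg_len2 = sx * sx + sy * sy
    # Project the passenger's head centre onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - dx) * sx + (py - dy) * sy) / seg_len2))
    cx, cy = dx + t * sx, dy + t * sy          # closest point on segment
    return (px - cx) ** 2 + (py - cy) ** 2 <= head_radius ** 2
```

If this returns true, the flow above falls back to showing the mirror's viewing area on the right A-pillar screen.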
In another possible implementation manner, the display unit 403 is specifically configured to:
acquiring image data of at least one vehicle exterior camera in a scene area comprising a scene area corresponding to an image displayed by the right side rearview mirror in a plurality of vehicle exterior cameras of the vehicle;
generating a target image matched with the right screen of the right A column according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror;
and displaying the target image on a right screen of the right A column.
In a further possible embodiment, in generating the target image adapted to the right screen of the right a-pillar according to the image data of the at least one exterior camera, the size of the right screen of the right a-pillar, and the imaging characteristics of the right rearview mirror, the display unit 403 is further configured to:
if the at least one vehicle exterior camera is single, generating a target image matched with the right screen of the right A column according to the image data displayed by the right rearview mirror, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror, which are acquired by the vehicle exterior camera;
if the number of the at least one vehicle exterior camera is multiple, fusing and generating target image data corresponding to the image displayed by the right side rearview mirror according to the multiple image data acquired by the vehicle exterior camera; and generating a target image matched with the right screen of the right A column according to the target image data, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror.
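The single/multi-camera branch above can be sketched as: average-fuse the frames when there are several, mirror the result horizontally to mimic the rearview mirror's imaging characteristic, and crop to the right screen's size. Representing frames as nested lists of pixel values, fusing by per-pixel averaging, and cropping rather than scaling are all simplifying assumptions.

```python
# Sketch of generating the target image for the right A-pillar screen:
# fuse multiple exterior-camera frames (per-pixel average), apply the
# mirror's horizontal flip, then crop to the screen size. Illustrative.

def fit_to_screen(frames, screen_w, screen_h):
    if len(frames) == 1:
        fused = frames[0]
    else:  # fuse multiple exterior-camera frames by averaging pixels
        fused = [[sum(f[r][c] for f in frames) / len(frames)
                  for c in range(len(frames[0][0]))]
                 for r in range(len(frames[0]))]
    mirrored = [row[::-1] for row in fused]    # rearview-mirror flip
    return [row[:screen_w] for row in mirrored[:screen_h]]
```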
In another possible implementation manner, the prediction unit 402 is specifically configured to:
determining the pre-observation direction of the driver according to the comprehensive state information;
if the pre-observation direction of the driver is the left side of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by a vehicle left side A column, and the target area is an area shielded by the vehicle left side A column and the farthest distance between an area boundary and the vehicle is less than a preset distance;
if the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to the right side A column of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by the right side A column of the vehicle, and the target area is an area shielded by the right side A column of the vehicle, and the farthest distance between an area boundary and the vehicle is smaller than a preset distance.
In another possible implementation manner, the display unit 403 is specifically configured to:
determining the running state of the vehicle according to the comprehensive state information of the vehicle, wherein the running state at least comprises straight running or turning;
determining state information of a target object according to the state information outside the vehicle, wherein the state information outside the vehicle is image data outside the vehicle acquired by at least one camera outside the vehicle, the target object comprises pedestrians and/or other vehicles, and the state information of the target object comprises the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is straight and the distance between the pedestrian and the vehicle is smaller than or equal to a first distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side of the pedestrian, and outputting first prompt information, wherein the first prompt information is used for prompting the driver to observe a screen corresponding to the target area;
if the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is smaller than or equal to a second distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the second distance is smaller than the first distance;
if the target object is other vehicles, when the driving state of the vehicle is straight and the distance between the other vehicles and the vehicle is smaller than or equal to a third distance, displaying a picture of a target area corresponding to the other vehicles on a screen on the same side with the other vehicles, and outputting first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the another vehicle and the vehicle is smaller than or equal to a fourth distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side with the another vehicle, and outputting first prompt information, wherein the fourth distance is smaller than the third distance.
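The four display cases above reduce to choosing a threshold by target type and driving state, with the turning thresholds (second and fourth distances) smaller than the straight-driving ones (first and third). The default metre values below are illustrative assumptions; in the embodiment they come from the safe distance prediction model.

```python
# Sketch of the display decision: select the first..fourth distance by
# target type and driving state, then display the target image and output
# the first prompt when the target is within range. Values illustrative.

def should_display(target_type, driving_state, distance,
                   d1=10.0, d2=6.0, d3=15.0, d4=9.0):
    """d1..d4 stand in for the first..fourth distances (d2 < d1, d4 < d3)."""
    if target_type == "pedestrian":
        threshold = d1 if driving_state == "straight" else d2
    else:  # another vehicle
        threshold = d3 if driving_state == "straight" else d4
    return distance <= threshold  # True: show target image + first prompt
```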
In yet another possible implementation, the display unit 403 is further configured to:
obtaining a vehicle data set, wherein the vehicle data set comprises a historical driving state of the vehicle and first speed information, and the driving state at least comprises straight driving or turning;
generating the target object data set according to the vehicle external state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, the pedestrian form information comprising height and orientation, the vehicle data set comprising: vehicle driving state information, vehicle trajectory information, and third speed information;
inputting the target object data set and the vehicle data set into a safe distance prediction model to obtain a safe distance between the target object and the vehicle, wherein the safe distance includes the first distance, the second distance, the third distance or the fourth distance, the safe distance prediction model is a model obtained by training according to a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, the target object data set and the vehicle data set belong to feature data, and the safe distance belongs to tag data.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a vehicle 50 according to an embodiment of the present disclosure, where the vehicle 50 includes: an autopilot domain controller 501 and a memory 502. The autopilot domain controller 501 and the memory 502 may be connected by a bus or other means, and the present embodiment is exemplified by the bus connection.
The automatic driving domain controller 501 is the computing core and control core of the vehicle 50 and can analyze various commands in the vehicle 50 and various data of the vehicle 50; for example, the automatic driving domain controller 501 may be a central processing unit (CPU), and can transmit various types of interactive data between internal structures of the vehicle 50, and so on. The memory 502 is a storage device in the vehicle 50 for storing programs and data. It is understood that the memory 502 here may include a built-in memory of the vehicle 50 and may also include an extended memory supported by the vehicle 50. The memory 502 provides storage space that stores the operating system of the vehicle 50 and the program code or instructions required by the automatic driving domain controller to perform the corresponding operations, and optionally also stores related data generated by the automatic driving domain controller when performing those operations.
In this embodiment, the vehicle 50 further includes a vehicle body domain controller for controlling devices inside and/or outside the vehicle.
In an embodiment of the present application, autopilot domain controller 501 runs executable program code in memory 502 for performing the following operations:
acquiring integrated state information of the vehicle, wherein the integrated state information comprises at least one of the following: steering wheel angle, head attitude of driver, and attitude of passenger in copilot;
predicting a target area which needs to be observed by human eyes of the driver and is in a sheltered state according to the comprehensive state information, wherein the target area comprises at least one of the following: an area which is shielded by a left A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than a preset distance, an area which is shielded by a right A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than the preset distance, and a viewing area corresponding to an image displayed by a right rearview mirror of the vehicle;
and controlling a left screen of a left A column and/or a right screen of a right A column of the vehicle to display images according to the target area, wherein the left screen is arranged on the left A column, and the right screen is arranged on the right A column.
In an alternative, in the aspect of predicting the target area which needs to be observed by the human eyes of the driver and is in the blocked state according to the comprehensive state information, the automatic driving area controller 501 is specifically configured to:
determining that the pre-observation direction of the driver is the right side of the vehicle and the head of the driver faces the right side rearview mirror of the vehicle according to the head posture of the driver;
determining whether the sight of the driver for observing the right side rearview mirror of the vehicle is shielded by the passenger in the front seat according to the head posture of the driver and the posture of the passenger in the front seat;
and if the sight of the driver observing the right side rearview mirror of the vehicle is blocked by the passenger in the front passenger seat, determining that the target area which the eyes of the driver need to observe and is in the blocked state is the viewing area corresponding to the image displayed by the right side rearview mirror of the vehicle.
In an alternative, in the aspect of controlling the image display of the left screen of the left a-pillar and/or the right screen of the right a-pillar of the vehicle according to the target area, the automatic driving range controller 501 is specifically configured to:
acquiring image data of at least one vehicle exterior camera in a scene area comprising a scene area corresponding to an image displayed by the right side rearview mirror in a plurality of vehicle exterior cameras of the vehicle;
generating a target image matched with the right screen of the right A column according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror;
and displaying the target image on a right screen of the right A column.
In an alternative, in the aspect of generating the target image adapted to the right screen of the right a-pillar according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right a-pillar, and the imaging characteristics of the right rearview mirror, the automatic driving range controller 501 is specifically configured to:
if the at least one vehicle exterior camera is single, generating a target image matched with the right screen of the right A column according to the image data displayed by the right rearview mirror, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror, which are acquired by the vehicle exterior camera;
if the number of the at least one vehicle exterior camera is multiple, fusing and generating target image data corresponding to the image displayed by the right side rearview mirror according to the multiple image data acquired by the vehicle exterior camera; and generating a target image matched with the right screen of the right A column according to the target image data, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror.
In an alternative, in the aspect of predicting the target area which needs to be observed by the human eyes of the driver and is in the blocked state according to the comprehensive state information, the automatic driving area controller 501 is specifically configured to:
determining the pre-observation direction of the driver according to the comprehensive state information;
if the pre-observation direction of the driver is the left side of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the scene area comprises a scene area which is shielded by a vehicle left side A column, and the target area is an area which is shielded by the vehicle left side A column and has the farthest distance between an area boundary and the vehicle smaller than a preset distance;
if the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces the right side A column of the vehicle, determining a target area according to image data of at least one vehicle exterior camera of a plurality of vehicle exterior cameras of the vehicle, wherein the view area comprises a view area shielded by the right side A column of the vehicle, and the target area is an area shielded by the right side A column of the vehicle, and the farthest distance between an area boundary and the vehicle is smaller than a preset distance.
In an alternative, in the aspect of controlling the image display of the left screen of the left a-pillar and/or the right screen of the right a-pillar of the vehicle according to the target area, the automatic driving range controller 501 is specifically configured to:
determining the running state of the vehicle according to the comprehensive state information of the vehicle, wherein the running state at least comprises straight running or turning;
determining state information of a target object according to the vehicle external state information, wherein the vehicle external state information is image data outside the vehicle, which is acquired through at least one vehicle external camera, the target object comprises pedestrians and/or other vehicles, and the state information of the target object comprises the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is straight and the distance between the pedestrian and the vehicle is smaller than or equal to a first distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side of the pedestrian, and outputting first prompt information, wherein the first prompt information is used for prompting the driver to observe a screen corresponding to the target area;
if the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is smaller than or equal to a second distance, displaying a target image of a target area corresponding to the pedestrian on a screen on the same side with the pedestrian, and outputting first prompt information, wherein the second distance is smaller than the first distance;
if the target object is other vehicles, when the driving state of the vehicle is straight and the distance between the other vehicles and the vehicle is smaller than or equal to a third distance, displaying a picture of a target area corresponding to the other vehicles on a screen on the same side with the other vehicles, and outputting first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the another vehicle and the vehicle is smaller than or equal to a fourth distance, displaying a picture of a target area corresponding to the another vehicle on a screen on the same side with the another vehicle, and outputting first prompt information, wherein the fourth distance is smaller than the third distance.
In one alternative, before determining the status information of the target object according to the off-board status information, the autopilot domain controller 501 is further configured to:
obtaining a vehicle data set, wherein the vehicle data set comprises a historical driving state of the vehicle and first speed information, and the driving state at least comprises straight driving or turning;
generating the target object data set according to the vehicle exterior state information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian form information and second speed information, the pedestrian form information comprising height and orientation, the vehicle data set comprising: vehicle driving state information, vehicle trajectory information, and third speed information;
inputting the target object data set and the vehicle data set into a safe distance prediction model to obtain a safe distance between the target object and the vehicle, wherein the safe distance includes the first distance, the second distance, the third distance or the fourth distance, the safe distance prediction model is a model obtained by training according to a plurality of target object data set samples, corresponding vehicle data set samples and corresponding safe distances, the target object data set and the vehicle data set belong to feature data, and the safe distance belongs to tag data.
It should be noted that the implementation of each operation may also correspond to the corresponding description with reference to the method embodiment shown in fig. 3.
Embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program, the computer program comprising program instructions which, when executed by an automatic driving domain controller, cause the automatic driving domain controller to perform the operations performed in the foregoing embodiments.
Embodiments of the present application further provide a computer program product which, when run on an automatic driving domain controller, implements the operations performed in the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a program, which is stored in a computer-readable storage medium and can include the processes of the embodiments of the methods described above when the program is executed. And the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Claims (9)

1. The image display method based on the visual field state of the driver in the driving scene is characterized in that the image display method is applied to an automatic driving domain controller of a domain controller system of a vehicle, the domain controller system comprises the automatic driving domain controller and a vehicle body domain controller, and the automatic driving domain controller is in communication connection with the vehicle body domain controller; the method comprises the following steps:
acquiring comprehensive state information of the vehicle, wherein the comprehensive state information comprises: the head posture of a driver and the posture of a passenger in a copilot;
predicting a target area which needs to be observed by human eyes of the driver and is in a sheltered state according to the comprehensive state information, wherein the target area comprises at least one of the following: an area which is shielded by a left A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than a preset distance, an area which is shielded by a right A column of the vehicle and in which the farthest distance between the area boundary and the vehicle is smaller than the preset distance, and a viewing area corresponding to an image displayed by a right rearview mirror of the vehicle;
controlling a left screen of a left A column and/or a right screen of a right A column of the vehicle to display images according to the target area, wherein the left screen is arranged on the left A column, and the right screen is arranged on the right A column;
the predicting of the target area which needs to be observed by human eyes of the driver and is in the shielded state according to the comprehensive state information comprises the following steps:
according to the head posture of the driver, determining that the pre-observation direction of the driver is the right side of the vehicle, and the head of the driver faces to a right side rearview mirror of the vehicle;
determining whether the sight of the driver observing the right side rearview mirror of the vehicle is shielded by the passenger in the copilot position according to the head posture of the driver and the posture of the passenger in the copilot position;
and if the sight of the driver for observing the right side rearview mirror of the vehicle is shielded by the passenger in the front passenger seat, determining that the target area which is required to be observed by the eyes of the driver and is in the shielded state is a scene area corresponding to the image displayed by the right side rearview mirror of the vehicle.
2. The method according to claim 1, wherein if the target area is a viewing area corresponding to an image displayed on a right side rearview mirror of the vehicle, the integrated status information of the vehicle comprises an interior status information and an exterior status information, and the interior status information comprises a head posture of a driver and a passenger posture in a front passenger seat; the controlling the left screen of the left A column and/or the right screen of the right A column of the vehicle according to the target area to perform image display includes:
acquiring image data of at least one vehicle exterior camera in a scene area comprising a scene area corresponding to an image displayed by the right side rearview mirror in a plurality of vehicle exterior cameras of the vehicle;
generating a target image matched with the right screen of the right A column according to the image data of the at least one vehicle exterior camera, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror;
and displaying the target image on a right screen of the right A column.
3. The method according to claim 2, wherein the generating a target image that fits the right screen of the right a-pillar based on the image data of the at least one exterior camera, the dimensions of the right screen of the right a-pillar, and the imaging characteristics of the right rearview mirror comprises:
if the at least one vehicle exterior camera is single, generating a target image matched with the right screen of the right A column according to the image data displayed by the right rearview mirror, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror, which are acquired by the vehicle exterior camera;
if the number of the at least one vehicle exterior camera is multiple, fusing and generating target image data corresponding to the image displayed by the right side rearview mirror according to the multiple image data acquired by the vehicle exterior camera; and generating a target image matched with the right screen of the right A column according to the target image data, the size of the right screen of the right A column and the imaging characteristics of the right rearview mirror.
4. The method of claim 1, wherein the integrated status information includes at least one of a steering wheel angle or a driver head pose, the driver head pose including the driver's head orientation; and predicting, according to the integrated status information, the target area that the driver's eyes need to observe and that is in an occluded state comprises:
determining the driver's pre-observation direction according to the integrated status information;
if the driver's pre-observation direction is the left side of the vehicle, determining the target area according to image data of at least one exterior camera, among the plurality of exterior cameras of the vehicle, whose view area includes the view area occluded by the left A-pillar of the vehicle, the target area being an area that is occluded by the left A-pillar of the vehicle and whose farthest boundary is less than a preset distance from the vehicle;
if the driver's pre-observation direction is the right side of the vehicle and the driver's head is oriented toward the right A-pillar of the vehicle, determining the target area according to image data of at least one exterior camera, among the plurality of exterior cameras of the vehicle, whose view area includes the view area occluded by the right A-pillar of the vehicle, the target area being an area that is occluded by the right A-pillar of the vehicle and whose farthest boundary is less than the preset distance from the vehicle.
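The direction-prediction step in this claim can be sketched as below. The thresholds, angle conventions, and signal names are illustrative assumptions, not taken from the patent; head pose is given priority over steering angle because the claim distinguishes the right A-pillar from the right mirror by head orientation.

```python
def pre_observation_direction(steering_angle_deg, head_yaw_deg,
                              steer_thresh=15.0, head_thresh=20.0):
    """Predict which side of the vehicle the driver is about to observe.

    Assumed convention: positive angles point left, negative point right.
    Head pose dominates the steering-wheel angle; steering is used only
    when the head is close to straight ahead.
    """
    if head_yaw_deg >= head_thresh:
        return "left"
    if head_yaw_deg <= -head_thresh:
        return "right"
    if steering_angle_deg >= steer_thresh:
        return "left"
    if steering_angle_deg <= -steer_thresh:
        return "right"
    return "ahead"
```

For example, a head yaw of -30 degrees yields "right" regardless of steering input, which would then trigger the occluded-area check against the right A-pillar.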
5. The method according to any one of claims 1 to 4, wherein, if the target area is an area that is occluded by the left A-pillar of the vehicle and whose farthest boundary is less than a preset distance from the vehicle, or an area that is occluded by the right A-pillar of the vehicle and whose farthest boundary is less than the preset distance from the vehicle, the integrated status information of the vehicle comprises in-vehicle status information and exterior status information, the in-vehicle status information comprising the driver's head pose and the steering wheel angle; and controlling the left screen of the left A-pillar and/or the right screen of the right A-pillar of the vehicle to display images according to the target area comprises:
determining the driving state of the vehicle according to the integrated status information of the vehicle, the driving state comprising at least driving straight or turning;
determining status information of a target object according to the exterior status information, the exterior status information being exterior image data acquired by at least one exterior camera, the target object comprising a pedestrian and/or another vehicle, and the status information of the target object comprising the distance between the target object and the vehicle;
if the target object is a pedestrian, when the driving state of the vehicle is driving straight and the distance between the pedestrian and the vehicle is less than or equal to a first distance, displaying a target image of the target area corresponding to the pedestrian on the screen on the same side as the pedestrian, and outputting first prompt information, the first prompt information prompting the driver to observe the screen corresponding to the target area;
if the target object is a pedestrian, when the driving state of the vehicle is turning and the distance between the pedestrian and the vehicle is less than or equal to a second distance, displaying a target image of the target area corresponding to the pedestrian on the screen on the same side as the pedestrian, and outputting the first prompt information, the second distance being less than the first distance;
if the target object is another vehicle, when the driving state of the vehicle is driving straight and the distance between the other vehicle and the vehicle is less than or equal to a third distance, displaying a picture of the target area corresponding to the other vehicle on the screen on the same side as the other vehicle, and outputting the first prompt information;
if the target object is another vehicle, when the driving state of the vehicle is turning and the distance between the other vehicle and the vehicle is less than or equal to a fourth distance, displaying a picture of the target area corresponding to the other vehicle on the screen on the same side as the other vehicle, and outputting the first prompt information, the fourth distance being less than the third distance.
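The four-way threshold logic of this claim reduces to a lookup keyed by target type and driving state. The sketch below uses illustrative distance values (the patent defines only their ordering: the turning thresholds are smaller than the straight-driving ones).

```python
def should_display(target_type, driving_state, distance, thresholds):
    """Decide whether to show the occluded-area image on the same-side
    A-pillar screen and output the first prompt information.

    thresholds maps (target_type, driving_state) to a safe distance,
    e.g. ("pedestrian", "straight") corresponds to the first distance.
    """
    return distance <= thresholds[(target_type, driving_state)]

# Example thresholds in metres (values are assumptions, not from the patent):
thresholds = {
    ("pedestrian", "straight"): 10.0,  # first distance
    ("pedestrian", "turning"):  6.0,   # second distance (< first)
    ("vehicle",    "straight"): 20.0,  # third distance
    ("vehicle",    "turning"):  12.0,  # fourth distance (< third)
}
```

For instance, `should_display("pedestrian", "turning", 5.0, thresholds)` is true, so the pedestrian's target-area image would be shown and the driver prompted; the same pedestrian at 8 m while turning would not trigger a display, but would while driving straight.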
6. The method of claim 5, wherein, before determining the status information of the target object according to the exterior status information, the method further comprises:
acquiring a vehicle data set, the vehicle data set comprising historical driving states of the vehicle and first speed information, the driving states comprising at least driving straight or turning;
generating a target object data set from the exterior status information, the target object data set comprising a pedestrian data set and/or a vehicle data set, the pedestrian data set comprising pedestrian morphology information and second speed information, the pedestrian morphology information comprising height and heading, and the vehicle data set comprising vehicle driving-state information, vehicle track information, and third speed information;
inputting the target object data set and the vehicle data set into a safe-distance prediction model to obtain the safe distance between the target object and the vehicle, the safe distance comprising the first distance, the second distance, the third distance, or the fourth distance, wherein the safe-distance prediction model is trained from a plurality of target object data set samples, corresponding vehicle data set samples, and corresponding safe distances, with the target object data set and the vehicle data set serving as feature data and the safe distance serving as label data.
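The patent does not specify the model class for the safe-distance predictor, only that it maps (target-object features, host-vehicle features) to a safe-distance label. As a minimal stand-in, a 1-nearest-neighbour regressor over the training samples illustrates the feature/label split; the feature layout and toy values below are assumptions.

```python
def predict_safe_distance(features, training_features, training_distances):
    """Return the safe-distance label of the closest training sample.

    features: combined target-object + host-vehicle feature vector,
    e.g. [target speed m/s, host speed m/s, turning flag]. Any regressor
    trained on (features -> safe distance) pairs would fit the claim.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(training_features)),
               key=lambda i: sq_dist(features, training_features[i]))
    return training_distances[best]

# Toy training set (hypothetical): features -> safe distance in metres
X = [[1.5, 8.0, 0], [1.5, 8.0, 1], [10.0, 15.0, 0], [10.0, 15.0, 1]]
y = [10.0, 6.0, 20.0, 12.0]
```

A query near a training sample returns that sample's label, e.g. `predict_safe_distance([1.4, 8.1, 0], X, y)` yields 10.0, which would then serve as the first distance in the claim-5 comparison.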
7. An image display device based on a driver's visual field state in a driving scene, the device comprising:
an acquisition unit configured to acquire integrated status information of a vehicle, the integrated status information including the driver's head pose and the pose of a passenger in the front passenger seat;
a prediction unit configured to predict, according to the integrated status information, a target area that the driver's eyes need to observe and that is in an occluded state, the target area comprising at least one of: an area that is occluded by the left A-pillar of the vehicle and whose farthest boundary is less than a preset distance from the vehicle; an area that is occluded by the right A-pillar of the vehicle and whose farthest boundary is less than the preset distance from the vehicle; and a viewing area corresponding to the image displayed by the right rearview mirror of the vehicle;
a display unit configured to control a left screen of the left A-pillar and/or a right screen of the right A-pillar of the vehicle to display images according to the target area, the left screen being a screen arranged on the left A-pillar and the right screen being a screen arranged on the right A-pillar;
wherein, in predicting the target area that the driver's eyes need to observe and that is in an occluded state according to the integrated status information, the prediction unit is specifically configured to:
determine, according to the driver's head pose, that the driver's pre-observation direction is the right side of the vehicle and that the driver's head is oriented toward the right rearview mirror of the vehicle;
determine, according to the driver's head pose and the pose of the passenger in the front passenger seat, whether the driver's line of sight toward the right rearview mirror of the vehicle is occluded by that passenger;
and, if the driver's line of sight toward the right rearview mirror of the vehicle is occluded by the passenger in the front passenger seat, determine that the target area that the driver's eyes need to observe and that is in an occluded state is the viewing area corresponding to the image displayed by the right rearview mirror of the vehicle.
8. A vehicle, characterized by comprising: an autonomous driving domain controller, a memory, and a computer program stored in the memory and executable on the autonomous driving domain controller, wherein the computer program, when executed by the autonomous driving domain controller, implements the method of any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program which, when run on an autonomous driving domain controller, implements the method of any one of claims 1 to 6.
CN202211720123.3A 2022-12-30 2022-12-30 Image display method and device based on driver visual field state in driving scene Active CN115675289B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211720123.3A CN115675289B (en) 2022-12-30 2022-12-30 Image display method and device based on driver visual field state in driving scene
CN202310447659.0A CN116674468A (en) 2022-12-30 2022-12-30 Image display method and related device, vehicle, storage medium, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310447659.0A Division CN116674468A (en) 2022-12-30 2022-12-30 Image display method and related device, vehicle, storage medium, and program

Publications (2)

Publication Number Publication Date
CN115675289A CN115675289A (en) 2023-02-03
CN115675289B true CN115675289B (en) 2023-04-07

Family

ID=85056901

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211720123.3A Active CN115675289B (en) 2022-12-30 2022-12-30 Image display method and device based on driver visual field state in driving scene
CN202310447659.0A Pending CN116674468A (en) 2022-12-30 2022-12-30 Image display method and related device, vehicle, storage medium, and program


Country Status (1)

Country Link
CN (2) CN115675289B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116039662B (en) * 2023-03-30 2023-08-08 深圳曦华科技有限公司 Automatic driving control method and related device
CN116620168B (en) * 2023-05-24 2023-12-12 江苏泽景汽车电子股份有限公司 Barrier early warning method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4995029B2 (en) * 2007-10-18 2012-08-08 富士重工業株式会社 Vehicle driving support device
CN109305105A (en) * 2018-11-29 2019-02-05 北京车联天下信息技术有限公司 A kind of pillar A blind monitoring device, vehicle and method
CN110682867A (en) * 2019-10-22 2020-01-14 一汽轿车股份有限公司 Screen display rearview mirror system
CN112224133A (en) * 2020-10-20 2021-01-15 中国第一汽车股份有限公司 Streaming media rearview mirror control method and device, vehicle and storage medium
CN114619964A (en) * 2022-04-20 2022-06-14 芜湖汽车前瞻技术研究院有限公司 Display system and intelligent vehicle of intelligence passenger cabin


Similar Documents

Publication Publication Date Title
CN115675289B (en) Image display method and device based on driver visual field state in driving scene
US11305695B1 (en) System and method for enhancing driver situational awareness in a transportation vehicle
CN108602465B (en) Image display system for vehicle and vehicle equipped with the same
CN107444263B (en) Display device for vehicle
JP6410879B2 (en) Mirror replacement system for vehicles
US10981507B1 (en) Interactive safety system for vehicles
WO2007105792A1 (en) Monitor and monitoring method, controller and control method, and program
KR102494865B1 (en) Vehicle, and control method for the same
CN111183068A (en) Parking control method and parking control device
CN111200689B (en) Projector for mobile body, portable terminal, and display method for portable terminal
US20190135169A1 (en) Vehicle communication system using projected light
US20200278743A1 (en) Control device
CN111183066A (en) Parking control method and parking control device
CN114556253A (en) Sensor field of view in self-driving vehicles
CN115257540A (en) Obstacle prompting method, system, vehicle and storage medium
JP2019103044A (en) Image display device and parking support system
US10981506B2 (en) Display system, vehicle control apparatus, display control method, and storage medium for storing program
US11345288B2 (en) Display system, vehicle control apparatus, display control method, and storage medium for storing program
CN112406703A (en) Vehicle and control method and control device thereof
US20220292686A1 (en) Image processing apparatus, image processing method, and computer-readable storage medium storing program
US20200119474A1 (en) Connector device and connector system
CN114523905A (en) System and method for displaying detection and track prediction of targets around vehicle
CN112810535A (en) Automobile A column blind area monitoring system, method and terminal
CN113386669A (en) Driving assistance system and vehicle
KR20220113929A (en) vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant