CN117022117A - Blind area display method and vehicle - Google Patents

Blind area display method and vehicle

Info

Publication number
CN117022117A
CN117022117A (Application CN202311100365.7A)
Authority
CN
China
Prior art keywords
vehicle
blind area
adjacent
blind
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311100365.7A
Other languages
Chinese (zh)
Inventor
李凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202311100365.7A priority Critical patent/CN117022117A/en
Publication of CN117022117A publication Critical patent/CN117022117A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K37/00: Dashboards
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for, electric constitutive elements
    • B60R16/023: Electric or fluid circuits specially adapted for vehicles, electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • B60R16/0231: Circuits relating to the driving or the functioning of the vehicle
    • B60R16/0232: Circuits relating to the driving or the functioning of the vehicle for measuring vehicle parameters and indicating critical, abnormal or dangerous conditions
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, for monitoring and displaying vehicle exterior blind spot views
    • B60R2300/8026: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, for monitoring and displaying vehicle exterior blind spot views in addition to a rear-view mirror system

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the application discloses a blind area display method and device, wherein the method comprises the following steps: acquiring object information of a nearby object of a target vehicle; determining a blind area of the nearby object based on the object information of the nearby object; and displaying the blind area of the nearby object.

Description

Blind area display method and vehicle
Technical Field
The application relates to the technical field of blind area detection, in particular to a blind area display method and a vehicle.
Background
While driving, a driver can observe part of the area in front of the vehicle and on its two sides, but cannot observe the area directly behind the vehicle or the areas to the rear of its sides; these unobservable regions form blind areas, which pose potential safety hazards while the vehicle is running.
In the prior art, sensors arranged around the vehicle detect the current vehicle's surroundings and acquire images of them, so that the driver can intuitively observe a real-time image of the vehicle's surroundings and avoid the danger caused by being unable to see into the vehicle's own blind areas.
However, when the current vehicle drives into the blind area of another vehicle, or cannot be observed by other vehicles or pedestrians because an obstacle shields it, danger easily occurs; the prior art does not provide a related method for determining and displaying such blind areas.
Disclosure of Invention
In view of this, an embodiment of the present application provides a blind area display method, including: acquiring object information of a nearby object of a target vehicle; determining a blind area of the nearby object based on the object information of the nearby object; and displaying the blind area of the nearby object.
In some embodiments, the nearby objects include vehicle objects; the object information comprises the position, the orientation and the vehicle type of a vehicle object; the determining the blind area of the adjacent object based on the object information of the adjacent object includes: determining a blind area of the vehicle object based on the position, the orientation and the vehicle type of the vehicle object; the blind area of the vehicle object is an area that cannot be observed by a driver of the vehicle object.
In some embodiments, the nearby objects include obstacle objects; the object information includes an obstacle boundary of the obstacle object; the determining the blind area of the adjacent object based on the object information of the adjacent object further includes: determining a preset blind area of the obstacle object based on the obstacle boundary; the preset blind area represents an area which cannot be observed by other objects due to shielding of the obstacle.
In some embodiments, the nearby objects include pedestrian objects; the object information includes a line-of-sight range of a pedestrian object; the determining the blind area of the adjacent object based on the object information of the adjacent object includes: and determining the blind area of the pedestrian object based on the sight line range of the pedestrian.
In some embodiments, the displaying of the blind area of the nearby object includes at least one of:
displaying a bird's eye view, wherein the bird's eye view comprises a first image of the target vehicle and a second image corresponding to a blind area of the adjacent object; and displaying a second image corresponding to the blind area of the adjacent object at a target position of a display panel, wherein the target position of the display panel is a position at which a driver of the target vehicle observes the adjacent object in a real scene through the display panel.
In some embodiments, the method comprises at least one of:
in the case of comprising at least two adjacent objects, the blind areas of different adjacent objects are displayed in different colors;
in the case where the blind areas of the adjacent objects include different types of sub-blind areas, the different types of sub-blind areas are displayed in different colors.
In some embodiments, after the displaying of the blind area of the nearby object, the method further comprises:
acquiring the position of the target vehicle; determining the relative position of the target vehicle and the blind area of the nearby object based on the position of the target vehicle and the blind area of the nearby object; and sending prompt information based on the relative position of the target vehicle and the blind area of the nearby object.
In some embodiments, after the displaying of the blind area of the nearby object, the method further comprises:
acquiring a spatial relationship between the target vehicle and the adjacent object; determining a risk level of a blind zone of the nearby object based on the spatial relationship; and adjusting the display state of the blind area of the adjacent object based on the risk level of the blind area of the adjacent object.
The embodiment of the application provides a blind area determining device, which comprises:
the acquisition module is used for acquiring object information of an adjacent object of the target vehicle;
the determining module is used for determining the blind area of the adjacent object based on the object information of the adjacent object;
and the display module is used for displaying the blind areas of the adjacent objects.
An embodiment of the present application provides a vehicle including:
a vehicle body;
the acquisition module is used for acquiring object information of an adjacent object of the target vehicle;
The determining module is used for determining the blind area of the adjacent object based on the object information of the adjacent object;
and the display module is used for displaying the blind areas of the adjacent objects.
According to the technical solution provided by the embodiments of the application, the blind areas of the target vehicle's nearby objects are determined by acquiring those objects' information, and the blind areas are displayed to the driver of the target vehicle; for different blind area types, blind areas of different danger degrees are displayed in different colors or display modes, so that the driver notices and focuses on these areas, which improves driving safety.
Drawings
Fig. 1 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 7 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 8 is a schematic flow chart of a blind area display method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a blind area determining device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a vehicle according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application.
Detailed Description
For a more complete understanding of the nature and technical content of the embodiments of the present application, reference should be made to the following detailed description taken in conjunction with the accompanying drawings, which are illustrative only and not limiting of the embodiments of the application.
It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order among them; where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
In the embodiment of the present application, the technical scheme of the present application may be implemented on any electronic device, for example, an electronic device with data processing capability, such as a vehicle computer, an external computer, a tablet computer, a desktop computer, a notebook computer, and a server.
Fig. 1 is a schematic flow chart of a blind area display method according to an embodiment of the present application. As shown in fig. 1, the method may include steps 101 to 103:
Step 101, acquiring object information of a nearby object of the target vehicle.
In some embodiments, the nearby object of the target vehicle may be an object within a preset range of the target vehicle. For example, a circular area may be defined with the target vehicle as a center and a preset length as a radius, and the area may be regarded as a vicinity of the target vehicle.
For example, the preset length may be related to the driver's driving habits. If the current driver drives aggressively and at higher speeds, the preset length may be increased appropriately so that blind area prompts are given earlier; if the current driver drives steadily at a moderate speed, the preset length need not be increased, which avoids disturbing the driver with premature prompts. A nearby object may be an object with viewing capability or an obstacle that may block the line of sight of other objects; by way of example, the nearby object may be, but is not limited to, a vehicle, a pedestrian, a tree, a building, and the like.
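As an illustration of the neighborhood test described above, the following Python sketch filters objects by a habit-adjusted preset radius. It is not part of the patent disclosure; the function names, base radius, speed threshold, and scaling factor are all assumptions chosen to make the example concrete.

```python
import math

# Assumed base radius (meters) of the circular neighborhood; the patent
# leaves the preset length unspecified.
BASE_RADIUS_M = 50.0

def preset_radius(aggressive_driver: bool, speed_kmh: float) -> float:
    """Enlarge the neighborhood for aggressive or fast driving so blind-area
    prompts arrive earlier; keep it unchanged for steady driving."""
    radius = BASE_RADIUS_M
    if aggressive_driver or speed_kmh > 80.0:  # assumed threshold
        radius *= 1.5                          # assumed scaling factor
    return radius

def is_nearby(target_xy, object_xy, radius_m: float) -> bool:
    """An object is 'nearby' if it lies inside the circle centered on the
    target vehicle with the preset length as its radius."""
    dx = object_xy[0] - target_xy[0]
    dy = object_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= radius_m

# Example: a pedestrian about 40 m away counts as nearby for a fast driver.
r = preset_radius(aggressive_driver=True, speed_kmh=90.0)
print(is_nearby((0.0, 0.0), (30.0, 26.0), r))  # True (~39.7 m <= 75 m)
```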
In some embodiments, the object information is location information, type information, etc. of the nearby object. The location information may be a location of the nearby object in a world coordinate system or a location relative to the target vehicle, and the type information may be an object type of the nearby object, such as an obstacle, a pedestrian, a vehicle, and the like. After determining that the nearby object is of any of the above types, the classification may be further refined, for example, the nearby object is a vehicle, and the vehicle may be further classified into a large vehicle, a small vehicle, and the like.
In some embodiments, the object information may be further refined for different kinds of nearby objects. For example, when the nearby object is a vehicle, the object information may further include the vehicle's orientation, vehicle type, and the like; the vehicle type refers to the model of the vehicle. Optionally, the object information may also include shape information of the vehicle, such as its size and contour; it may include both the type information and the shape information, or either one of them. When the nearby object is a pedestrian, the object information may include the pedestrian's walking direction, line-of-sight range, body posture, and the like; when the nearby object is a building, the object information may include the building's size, shape, and so on.
In some embodiments, the object information may be acquired by an in-vehicle device. Wherein the vehicle-mounted devices include, but are not limited to, vehicle-mounted cameras, sensors, and the like. For example, the camera installed around the target vehicle may capture an image of the nearby environment, and the captured image may be recognized to obtain object information of the nearby object. The sensor installed on the target vehicle can also be used for detecting the adjacent object, and the object information of the adjacent object can be obtained. The sensor may be, for example, an infrared sensor, an ultrasonic sensor, or the like.
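The object information described above can be modeled as a small per-object record. The dataclass sketch below is illustrative only: the patent does not prescribe a data format, so the field names and the three-way type taxonomy are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional, Tuple

class ObjectType(Enum):
    VEHICLE = auto()
    PEDESTRIAN = auto()
    OBSTACLE = auto()

@dataclass
class ObjectInfo:
    """Per-object record built from camera/sensor detections. Only the
    fields relevant to the detected type are filled in: vehicles carry an
    orientation and vehicle type, pedestrians a gaze direction, obstacles
    a boundary polygon."""
    obj_type: ObjectType
    position: Tuple[float, float]                 # world or vehicle frame
    heading_deg: Optional[float] = None           # vehicles: travel direction
    vehicle_model: Optional[str] = None           # e.g. "truck", "sedan"
    gaze_deg: Optional[float] = None              # pedestrians: line of sight
    boundary: Tuple[Tuple[float, float], ...] = field(default_factory=tuple)

# Example records for the three kinds of nearby object discussed above.
truck = ObjectInfo(ObjectType.VEHICLE, (12.0, 3.5),
                   heading_deg=90.0, vehicle_model="truck")
walker = ObjectInfo(ObjectType.PEDESTRIAN, (5.0, -2.0), gaze_deg=180.0)
wall = ObjectInfo(ObjectType.OBSTACLE, (20.0, 0.0),
                  boundary=((18.0, -1.0), (22.0, -1.0),
                            (22.0, 1.0), (18.0, 1.0)))
```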
It should be noted that, during the driving of the target vehicle, the surrounding adjacent objects are continuously changed, so that the information of the adjacent objects is continuously acquired during the driving of the target vehicle.
Step 102, determining a blind area of the nearby object based on the object information of the nearby object.
The blind area of a nearby object is an area that the nearby object itself cannot observe, or an area that other objects cannot observe because the nearby object blocks their line of sight. The other object may be a pedestrian or a vehicle other than the target vehicle.
For example, take a large vehicle as the nearby object: its driver can observe part of the area on both sides of the vehicle through the rear-view mirrors, but cannot observe the area below the windshield directly in front of the vehicle, the area directly behind it, or the areas to the rear of its sides; these form the large vehicle's blind area, i.e., the area its driver cannot observe. As another example, suppose the target vehicle and a pedestrian are on opposite sides of a wall: the pedestrian cannot observe the target vehicle because the wall blocks the pedestrian's line of sight, and the area that the pedestrian's sight cannot reach due to this shielding is the wall's blind area.
In some embodiments, the blind area of a nearby object can be determined by matching the acquired object information against preset information, where the preset information contains the blind area ranges of various kinds of nearby objects. The preset information may be data stored in the vehicle-mounted device, obtained in advance through artificial intelligence (AI) learning or through big-data analysis; it may also be obtained by the vehicle-mounted device or other devices through networking or other communication means. The method of obtaining the preset information is not limited here; any method that can obtain the blind area ranges of various nearby objects can be used in this embodiment.
For example, the blind area of each vehicle type is essentially fixed, so the blind area range corresponding to each vehicle type can be pre-stored in the vehicle-mounted device or obtained through networking or other communication means. For a large building, a larger range can be directly designated as the blind area, prompting the driver to slow down in advance and avoid danger. Further, the blind area of a nearby object of the target vehicle can be determined directly from the matching result, or the vehicle-mounted system can analyze the acquired object information in real time to determine the blind area; for example, a pedestrian may look around while walking, in which case real-time analysis of the pedestrian's line of sight can determine the extent of the pedestrian's blind area.
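The matching step described above amounts to a keyed lookup into a table of per-type blind area templates. The sketch below is a minimal illustration; the vehicle types, rectangle extents, and cautious fallback rule are invented for the example and would in practice come from the pre-stored or downloaded preset information.

```python
# Blind-area templates in the vehicle's own frame (x forward, y left), each
# a list of axis-aligned rectangles (x_min, x_max, y_min, y_max) in meters.
# All extents below are assumed values, not figures from the patent.
PRESET_BLIND_ZONES = {
    "sedan": [(-6.0, -1.0, -2.0, 2.0),    # directly behind the vehicle
              (1.5, 3.5, -1.5, 1.5)],     # low zone in front of the hood
    "truck": [(-12.0, -1.0, -3.0, 3.0),   # long zone behind the box
              (-8.0, 2.0, -4.5, -1.5),    # right-rear side
              (0.0, 4.0, -3.0, 3.0)],     # high cab: zone ahead as well
}

def lookup_blind_zones(vehicle_model: str):
    """Match the detected vehicle type against the preset information; fall
    back to the largest template when the type is unknown, erring on the
    side of caution."""
    return PRESET_BLIND_ZONES.get(vehicle_model, PRESET_BLIND_ZONES["truck"])

print(len(lookup_blind_zones("sedan")))    # 2 rectangles
print(len(lookup_blind_zones("unknown")))  # 3 rectangles (cautious fallback)
```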
In addition, as described in step 101, the acquired object information changes continuously while the target vehicle is driving, so the blind areas determined from the changing object information also change continuously; that is, the blind areas of surrounding nearby objects are determined continuously.
Step 103, displaying the blind area of the nearby object.
In some embodiments, displaying the blind area of a nearby object may involve projecting the blind area so that the driver observes a picture in which the projected blind area overlaps the actual scene. For example, the blind area may be projected onto the front windshield, in which case the driver can directly observe the nearby object and its blind area range through the windshield; an image formed by superimposing the blind area on the real scene may be displayed on the central control screen for the driver to check; or a virtual image of the surrounding environment may be generated in real time from the acquired object information, with the blind area of the nearby object marked on it for display. The above is merely an exemplary description of the display method; in practice, any display method that allows the driver of the target vehicle to perceive the blind area of the nearby object may be used in this step.
Based on the above embodiment: while a vehicle is running, each different object has its own corresponding blind area range. If only the blind area range of the target vehicle itself is determined, the driver can avoid objects inside the target vehicle's blind area, but once the target vehicle enters the blind area of another object, the driver cannot reliably judge whether the current position is safe, so a potential safety hazard remains. The embodiment of the application therefore determines the blind area of a nearby object based on that object's information and displays it. This lets the driver of the target vehicle learn the blind areas of nearby objects in time and adjust the driving strategy in advance, avoiding the danger that arises when other objects cannot observe the target vehicle and therefore cannot avoid it. Meanwhile, the above steps are executed continuously while the target vehicle is running: blind area information of surrounding objects is acquired in real time and the blind area ranges are adjusted in real time, ensuring that the driver always has accurate blind area ranges while driving.
In some embodiments, the nearby object may be a vehicle object, and the object information may include a position, an orientation, and a vehicle type of the vehicle object. Fig. 2 is a schematic flow chart of an alternative blind zone display method provided in an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, step 102 in fig. 1 may be updated to step 1021, and the method is described in connection with the steps shown in fig. 2, and includes the following steps:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 1021, determining the blind area of the vehicle object based on the position, the orientation, and the vehicle type of the vehicle object. The blind area of the vehicle object is an area that cannot be observed by the driver of the vehicle object.
The position of the vehicle object is the position of the vehicle object in a world coordinate system, the direction of the vehicle object is the running direction of the vehicle object, and the vehicle type of the vehicle object is the type of the vehicle object. For example, the types of the vehicle objects may be tricycles, sedans, SUVs, trucks, semitrailers, etc., and it is to be noted that there is a large difference in the blind area ranges between different types of vehicles, and the blind area ranges of the same type of vehicle may be regarded as fixed, so that the blind area ranges of the vehicles may be determined in combination with specific vehicle types.
In some embodiments, the blind area range of a vehicle may be determined based on its vehicle type. Illustratively, the blind areas of a small vehicle are mainly concentrated directly behind the vehicle, below the left and right doors, in front of the hood, and around the left and right front wheels. The blind areas of large and medium-sized vehicles are mainly concentrated behind and on the sides of the vehicle; notably, because a large truck's body is high, a certain range in front of the vehicle is also a blind area.
Further, in some embodiments, after the blind area range of the vehicle object is acquired, the blind area is attached to the vehicle object with reference to the object's orientation and position, so that the correct blind area is displayed around the vehicle object. It is understood that the orientation of the vehicle object, that is, its direction of travel, can generally be determined from the driver's position relative to the whole vehicle. For example, some vehicles are left-hand drive and some are right-hand drive, but the driver always sits toward the head of the vehicle, so the direction the driver faces is the vehicle's direction of travel, and the blind area is laid out around the cockpit accordingly. Therefore, after the blind area of the vehicle object's corresponding vehicle type is determined, the blind area can be combined with the actual vehicle object's position to obtain an accurate blind area range for the nearby vehicle.
It should be noted that after the blind area range corresponding to the vehicle type is determined, the blind area is combined with the real vehicle object so as to accurately reflect that object's actual blind area, which is then displayed for the driver of the target vehicle to reference. The acquired orientation and position of the vehicle object tell the system how to fuse the blind area with the nearby vehicle, so that the driver can observe in real time an image of the blind area combined with the vehicle, make judgments accordingly, and adjust the driving strategy in advance.
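Attaching a vehicle-type blind area template to the detected vehicle reduces to a rigid 2-D transform by the detected pose. The sketch below shows this for a single polygon; the template coordinates are placeholder values, and the rotation-plus-translation math is ordinary geometry rather than anything specific to the patent.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def attach_blind_zone(template: List[Point], position: Point,
                      heading_deg: float) -> List[Point]:
    """Rotate a blind-zone polygon (defined in the vehicle frame, with x as
    the travel direction) by the vehicle's heading, then translate it to the
    vehicle's world position, so the zone is drawn around the actual vehicle."""
    h = math.radians(heading_deg)
    cos_h, sin_h = math.cos(h), math.sin(h)
    world = []
    for x, y in template:
        wx = position[0] + x * cos_h - y * sin_h
        wy = position[1] + x * sin_h + y * cos_h
        world.append((wx, wy))
    return world

# Assumed right-rear blind zone of a truck heading due north (90 degrees).
rear_right = [(-8.0, -1.5), (-1.0, -1.5), (-1.0, -4.0), (-8.0, -4.0)]
print(attach_blind_zone(rear_right, position=(100.0, 50.0), heading_deg=90.0))
```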
Step 103, displaying the blind area of the nearby object.
Based on this embodiment, a driver can learn the blind area ranges of surrounding vehicles in advance and drive reasonably in light of his own driving situation, avoiding traffic accidents caused by entering other vehicles' blind areas and improving driving safety.
In some embodiments, the nearby objects include obstacle objects; the object information includes an obstacle boundary of an obstacle object. Fig. 3 is a schematic flow chart of an alternative blind zone display method according to an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, step 102 in fig. 1 may be updated to step 1022, which is described in connection with the steps shown in fig. 3, the method comprising the steps of:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 1022, determining a preset blind area of the obstacle object based on the obstacle boundary. The obstacle boundary represents the outline of the obstacle, from which information such as the obstacle's size and shape can be judged; in practical applications, the type of the obstacle can also be judged preliminarily from its boundary, for example for obstacles with distinctive boundary features such as trees and buildings.
In some embodiments, the obstacle object may be a small obstacle such as a tree, a vehicle parked at the roadside, or a roadside bus shelter. In this case the obstacle may block only part of the target vehicle; that is, only part of the target vehicle lies within the region that other objects cannot see. Meanwhile, as the target vehicle and other objects move, the region in which the small obstacle blocks other objects' sight also changes, and so does the portion of the target vehicle that can be observed. The obstacle blind area can therefore be determined dynamically by combining the small obstacle's boundary with the positions of other objects, and the driver is prompted when a large portion of the target vehicle is blocked.
For example, suppose trees are planted at an intersection. When the target vehicle arrives at the intersection, other vehicles can see only a small part of its body because of the trees, so the target vehicle is easily overlooked; as driving continues, other vehicles suddenly observe most of the target vehicle's body and may brake abruptly, which easily creates danger. The blind areas of other objects can therefore be determined dynamically by combining their positions with the obstacle boundary, and the driver is prompted that other vehicles may be unable to see the target vehicle when a large area of it is blocked.
In some embodiments, the obstacle object may be a large obstacle such as a tall building or a fence. In this case the target vehicle cannot know whether other objects exist on the far side of the obstacle, so it cannot determine their positions or obtain in real time the dynamic blind area that the obstacle creates in their line of sight. In view of this, when a large obstacle is detected, a preset blind area may be set for it automatically: regardless of whether an object exists on the other side, the vehicle is considered to enter the obstacle's blind area, and the range of this blind area may be determined from the target vehicle's current speed. For example, when the vehicle is traveling fast and a large building is detected nearby, a large region on the target vehicle's side of the obstacle can be designated as the obstacle's blind area, prompting the driver in advance that other objects may appear from behind the obstacle; when the vehicle is traveling at a low speed, this preset blind area range can be narrowed appropriately.
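The speed-dependent preset blind area for large obstacles can be sketched as a warning depth that grows with vehicle speed. All thresholds and depths below are assumptions; the text above only requires that the range widen at high speed and narrow at low speed.

```python
def preset_obstacle_depth(speed_kmh: float) -> float:
    """Depth (meters) of the preset blind area claimed around a large
    obstacle: wider at high speed so the driver is warned earlier, narrower
    at low speed to avoid needless prompts. All values are assumed."""
    if speed_kmh >= 60.0:
        return 30.0
    if speed_kmh >= 30.0:
        return 15.0
    return 8.0

def entering_obstacle_blind_zone(distance_to_obstacle_m: float,
                                 speed_kmh: float) -> bool:
    """Treat the vehicle as entering the obstacle blind area, whether or not
    anything is actually on the far side, once it is within the
    speed-scaled depth."""
    return distance_to_obstacle_m <= preset_obstacle_depth(speed_kmh)

print(entering_obstacle_blind_zone(20.0, speed_kmh=70.0))  # True: wide zone
print(entering_obstacle_blind_zone(20.0, speed_kmh=20.0))  # False: narrow zone
```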
Step 103, displaying the blind area of the nearby object.
Based on this embodiment, blind areas can be set flexibly for different types of obstacles: when the positions of other objects can be acquired, the blind area range is determined dynamically from those positions, and when the vehicle faces a large obstacle, the driver is prompted through the preset blind area. This improves the accuracy and reliability of obstacle blind area determination and, to a certain extent, traffic safety.
In some embodiments, the nearby objects include pedestrian objects; the object information includes a line-of-sight range of a pedestrian object. Fig. 4 is a schematic flow chart of an alternative blind zone display method provided in an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, step 102 in fig. 1 may be updated to step 1023, and the method will be described in connection with the steps shown in fig. 4, and includes the following steps:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 1023, determining the blind area of the pedestrian object based on the line-of-sight range of the pedestrian.
In some embodiments, the pedestrian's line-of-sight range may be determined from the pedestrian's actions. For example, a pedestrian may be looking at a mobile phone while walking; the pedestrian can then be considered unable to observe the surrounding environment, and the whole area around the pedestrian can be determined as a blind area. As another example, when a pedestrian walks in a normal posture, the pedestrian can observe the environment directly ahead, and the area behind the pedestrian can be determined as a blind area.
Step 103, displaying the blind area of the nearby object.
It should be noted that because pedestrian movement is somewhat random and a pedestrian may look around, a pedestrian's blind area range changes rather frequently. A region within a certain range around the pedestrian may therefore be determined as the pedestrian's first blind area, and a second blind area determined from the pedestrian's line-of-sight range; this ensures driving safety while improving the accuracy of pedestrian blind area determination.
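The two-layer pedestrian blind area described above, a fixed first zone around the pedestrian plus a second zone derived from the gaze, can be tested point by point. The radii and field-of-view angle in the sketch are assumed values, not figures from the patent.

```python
import math

def in_pedestrian_blind_zone(pedestrian_xy, gaze_deg, point_xy,
                             first_zone_radius=3.0,   # assumed radius (m)
                             fov_deg=120.0,           # assumed field of view
                             sight_range=25.0):       # assumed sight range (m)
    """True if a point is blind to the pedestrian: inside the fixed first
    blind zone around the pedestrian (kept because the gaze shifts often),
    beyond the sight range, or outside the gaze cone (the second zone)."""
    dx = point_xy[0] - pedestrian_xy[0]
    dy = point_xy[1] - pedestrian_xy[1]
    dist = math.hypot(dx, dy)
    if dist <= first_zone_radius:
        return True                   # first blind zone: fixed ring
    if dist > sight_range:
        return True                   # farther than the pedestrian can see
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = abs((bearing - gaze_deg + 180.0) % 360.0 - 180.0)
    return off_axis > fov_deg / 2.0   # outside the gaze cone

# Pedestrian at the origin looking east: a car directly behind is unseen.
print(in_pedestrian_blind_zone((0, 0), 0.0, (-10, 0)))  # True
print(in_pedestrian_blind_zone((0, 0), 0.0, (10, 0)))   # False
```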
In some embodiments, displaying the blind area of the adjacent object may also be achieved by displaying a bird's eye view and/or displaying a second image corresponding to the blind area of the adjacent object at the target location of the display panel. Fig. 5 is a schematic flow chart of an alternative blind zone display method provided in an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, step 103 in fig. 1 may be updated to step 1031 and/or step 1032, and the method is described in connection with the steps shown in fig. 5, and includes the following steps:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 102, determining a blind area of the nearby object based on the object information of the nearby object.
Step 1031, displaying a bird's eye view, wherein the bird's eye view comprises a first image of the target vehicle and a second image corresponding to the blind area of the nearby object.
Step 1032, displaying a second image corresponding to the blind area of the nearby object at a target position of a display panel, where the target position of the display panel is the position at which the driver of the target vehicle observes the nearby object in the real scene through the display panel.
In some embodiments, the bird's eye view may be synthesized from pictures taken by the vehicle-mounted imaging devices. Specifically, the images captured by the imaging devices around the target vehicle may be stitched into a bird's-eye image containing the target vehicle; the blind areas of the target vehicle's nearby objects are then marked on this image, and the bird's-eye image containing the blind areas is displayed on the vehicle's central control screen. The advantage of this method is that displaying the blind area on a real image lets the driver quickly determine the blind area's actual extent in the bird's eye view.
In some embodiments, a virtual bird's eye view may instead be generated from the acquired object information of the target vehicle's nearby objects. The virtual bird's eye view can display virtual models of the target vehicle and the nearby objects and can simultaneously generate the blind areas of the nearby objects' models, so that a virtual bird's eye view containing the target vehicle and the nearby objects' blind areas is displayed on the central control screen. The advantage of the virtual bird's eye view is that unnecessary details of the actual environment are omitted, so the driver grasps the blind area range of nearby objects more intuitively.
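Both bird's-eye variants come down to composing the target vehicle's first image and the blind-area second images in one top-down frame. The text-grid sketch below only illustrates that composition; the grid size, cell scale, and zone rectangle are invented, and a real implementation would stitch camera images (first variant) or render virtual models (second variant).

```python
def render_birds_eye(target_xy, blind_zones, size=21, cell_m=2.0):
    """Rasterize a crude bird's-eye view as text: 'T' marks the target
    vehicle (the first image), '#' marks cells inside any nearby object's
    blind zone (the second image)."""
    half = size // 2
    rows = []
    for gy in range(size - 1, -1, -1):          # top row = largest y
        row = []
        for gx in range(size):
            # Center of this grid cell in world meters, target at the middle.
            wx = target_xy[0] + (gx - half) * cell_m
            wy = target_xy[1] + (gy - half) * cell_m
            ch = "."
            for (x0, x1, y0, y1) in blind_zones:  # axis-aligned zones
                if x0 <= wx <= x1 and y0 <= wy <= y1:
                    ch = "#"
            if gx == half and gy == half:
                ch = "T"
            row.append(ch)
        rows.append("".join(row))
    return "\n".join(rows)

# Assumed: one blind zone behind a truck to the target vehicle's right-rear.
print(render_birds_eye((0.0, 0.0), [(4.0, 14.0, -16.0, -6.0)]))
```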
In some embodiments, the blind area of a nearby object may be projected onto the vehicle's front windshield through a head-up display (HUD); through the windshield, the driver then observes the displayed blind area image superimposed on the nearby object in the actual scene.
In some embodiments, the blind area of a nearby object can be combined with the real scene and displayed on the target vehicle's front windshield through augmented reality (AR). Specifically, the vehicle-mounted device may identify the driver's eye position and adjust the AR image in real time in combination with the surrounding environment information, fusing the virtual blind area image with the actual scene. The driver can thus observe the blind areas of nearby objects while watching the road conditions, and because the AR imaging adjusts with the eye position, the blind area display matches the real scene more closely and offers a better use experience.
In some embodiments, the method further comprises displaying the blind zone of different nearby objects in different colors in the case of comprising at least two nearby objects and/or displaying the different types of sub-blind zones in different colors in the case of the blind zone of the nearby objects comprising different types of sub-blind zones. Fig. 6 is a schematic flow chart of an alternative blind zone display method provided by an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, step 103 in fig. 1 may further comprise step 1033 and/or step 1034, described in connection with the steps shown in fig. 6, the method comprising the steps of:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 102, determining a blind area of the nearby object based on the object information of the nearby object.
Step 1033, in the case of including at least two adjacent objects, the blind areas of different adjacent objects are displayed in different colors.
Step 1034, in the case that the blind areas of the adjacent objects include different types of sub-blind areas, the different types of sub-blind areas are displayed in different colors.
Wherein the sub-blind areas may be classified into different types, such as front, side, rear, etc., based on their orientation with respect to the adjacent object.
In some embodiments, when multiple nearby objects are present in the scene, different colors may be used when displaying the blind areas of different objects. For example, the blind areas of vehicles around the target vehicle may be displayed in red, the blind areas of pedestrians in green, and the blind areas of obstacles in yellow. This helps the driver judge the blind area ranges of different objects, especially when several blind areas overlap; for example, when the target vehicle and the vehicle ahead are both inside an obstacle's blind area, the blind area of the vehicle ahead is displayed in a color different from that of the obstacle's blind area, helping the driver of the target vehicle distinguish them.
In some embodiments, for a nearby object having multiple sub-blind areas, different types of sub-blind areas for the nearby object may be presented in different colors. Illustratively, the adjacent objects are large trucks, and the large trucks have a plurality of blind areas, the risk coefficients of the rear and side rear blind areas are highest, the blind areas can be marked as red or yellow, and the blind areas on the front and the two sides of the vehicle door can be displayed in green or blue.
In some embodiments, for a nearby object having multiple sub-blind areas, different display colors may be set based on the distance between each sub-blind area and the nearby object. Illustratively, the blind area within the range closest to a large truck is displayed in red and the next-closest range in yellow, with the sub-blind areas labeled in different colors according to their distance from the truck; that is, the coloring transitions from striking or highlighted colors near the object to less prominent colors farther away.
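The color rules above, one color per object type plus per-type or per-distance colors for sub-blind areas, can be combined into a single lookup. The palette and distance bands below are assumptions; the text only requires that the colors differ and that nearer, riskier zones be more striking.

```python
from typing import Optional

# Assumed per-object palette: one color per kind of nearby object.
OBJECT_COLORS = {"vehicle": "red", "pedestrian": "green", "obstacle": "yellow"}

# Assumed per-sub-zone palette: riskier zones (rear, side-rear) get warmer colors.
SUB_ZONE_COLORS = {"rear": "red", "side_rear": "yellow",
                   "front": "green", "door_side": "blue"}

def zone_color(object_type: str, sub_zone: Optional[str] = None,
               distance_m: Optional[float] = None) -> str:
    """Pick a display color: the per-object color by default, overridden by
    the sub-zone type when given, and pushed toward striking colors for
    sub-zones close to the object when a distance is given."""
    color = OBJECT_COLORS.get(object_type, "gray")
    if sub_zone is not None:
        color = SUB_ZONE_COLORS.get(sub_zone, color)
    if distance_m is not None:        # distance bands are assumed values
        if distance_m < 2.0:
            color = "red"             # closest band: most striking
        elif distance_m < 5.0:
            color = "yellow"
    return color

print(zone_color("vehicle", sub_zone="rear"))                    # red
print(zone_color("vehicle", sub_zone="front", distance_m=1.0))   # red
print(zone_color("pedestrian"))                                  # green
```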
Based on the above embodiments, displaying blind areas in different colors helps the driver distinguish them, and the color of a blind area lets the driver judge that area's degree of danger; this increases the amount of information the driver obtains from the blind area display and further improves traffic safety.
In some embodiments, the method further comprises obtaining the position of the target vehicle; determining the relative position of the target vehicle and the blind area of the nearby object based on the position of the target vehicle and the blind area of the nearby object; and sending prompt information based on that relative position. Fig. 7 is a schematic flow chart of an alternative blind zone display method provided in an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, the method further includes step 104, which comprises steps 1041 to 1043; the method is described in connection with the steps shown in fig. 7 and includes the following steps:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 102, determining a blind area of the nearby object based on the object information of the nearby object.
Step 1041, obtaining a position of the target vehicle.
Step 1042, determining the relative position of the target vehicle and the blind area of the nearby object based on the position of the target vehicle and the blind area of the nearby object.
Step 1043, sending prompt information based on the relative position of the target vehicle and the blind area of the nearby object.
In some embodiments, the relative position of the target vehicle and a nearby object's blind area can be determined by acquiring the target vehicle's position and combining it with the blind area, and prompt information is sent when the target vehicle enters or approaches the blind area. For example, when the time the target vehicle has spent inside the blind area exceeds a preset duration, or the proportion of the target vehicle's body inside the blind area exceeds a preset proportion, first prompt information may be sent to remind the driver, who is inside the nearby object's blind area, to drive out as soon as possible or to mind driving safety. Alternatively, second prompt information may be sent when the distance between the target vehicle and the blind area is smaller than a preset value, prompting the driver that the vehicle is about to enter the blind area and reminding the driver to take care. Further, the driving speed of the target vehicle may be acquired to estimate when the target vehicle is likely to enter the blind area, and third prompt information sent to remind the driver that the vehicle may enter the nearby object's blind area after a period of time.
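The three prompts can be folded into one decision routine keyed on dwell time, body overlap, distance, and predicted time to entry. The thresholds and message texts in the sketch are assumed values used to make it concrete.

```python
from typing import Optional

def blind_zone_prompt(inside: bool, dwell_s: float, overlap_ratio: float,
                      distance_m: float, speed_mps: float) -> Optional[str]:
    """Return one of the three prompts from the text, or None.

    - first prompt: already inside too long, or too much of the body inside
    - second prompt: not inside yet, but closer than a preset distance
    - third prompt: approaching fast enough to enter within a preset time
    All thresholds (3 s, 50 %, 10 m, 5 s) are assumed values."""
    if inside and (dwell_s > 3.0 or overlap_ratio > 0.5):
        return "first prompt: inside a blind area, leave it or drive carefully"
    if not inside and distance_m < 10.0:
        return "second prompt: about to enter a blind area"
    if not inside and speed_mps > 0.0 and distance_m / speed_mps < 5.0:
        return "third prompt: may enter a blind area shortly"
    return None

print(blind_zone_prompt(True, dwell_s=4.0, overlap_ratio=0.2,
                        distance_m=0.0, speed_mps=8.0))   # first prompt
print(blind_zone_prompt(False, dwell_s=0.0, overlap_ratio=0.0,
                        distance_m=30.0, speed_mps=10.0)) # third prompt
```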
Based on the embodiment, different prompt messages can be sent out under the condition that the target vehicle and the blind area are in different relative positions, and the driver is prompted to pay attention to the blind area existing nearby in the whole running process of the target vehicle, so that the running safety is improved.
In some embodiments, the method further comprises obtaining a spatial relationship between the target vehicle and the nearby object, determining a risk level of the blind area of the nearby object based on the spatial relationship, and adjusting the display state of the blind area of the nearby object based on that risk level. Fig. 8 is a schematic flow chart of an alternative blind zone display method provided in an embodiment of the present application, which may be executed by a processor of a computer device. Based on fig. 1, the method may further comprise steps 1035 to 1037; it is described in connection with the steps shown in fig. 8 and includes the following steps:
Step 101, acquiring object information of a nearby object of the target vehicle.
Step 102, determining a blind area of the nearby object based on the object information of the nearby object.
Step 1035, obtaining a spatial relationship between the target vehicle and the nearby object.
Step 1036, determining the risk level of the blind area of the nearby object based on the spatial relationship.
Step 1037, adjusting the display state of the blind area of the nearby object based on the risk level of the blind area of the nearby object.
In some embodiments, the spatial relationship between the target vehicle and the adjacent object is a relative positional relationship between the target vehicle and the adjacent object in the world coordinate system, and the positional relationship may be directly obtained through the bird's eye view in the above embodiments, or may be determined through measurement of the relative positional relationship between the target vehicle and the adjacent object by an on-board sensor.
In some embodiments, the blind areas of nearby objects may be graded by risk level, with high-risk blind areas shown in conspicuous or highlighted colors; the risk level of a blind area is determined by the relative position of the target vehicle and that blind area. It will be appreciated that blind areas close to the target vehicle carry a higher risk level, while blind areas farther from the target vehicle carry a relatively lower one.
In some embodiments, when the target vehicle approaches a blind area of a nearby object, the risk level of that blind area may be considered higher, and its display state adjusted to prompt the driver. Illustratively, if the target vehicle is located on the left side of a truck, the truck's left blind area is the higher-risk region for the target vehicle while the truck's other blind areas carry lower risk levels, so the truck's left blind area may be highlighted to remind the driver that it currently carries a higher risk.
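Grading and restyling the blind areas can be as simple as binning the distance between the target vehicle and each zone. The bin edges and display attributes below are assumptions; the point is only that nearer zones receive a higher level and a more striking display state.

```python
def risk_level(distance_to_zone_m: float) -> int:
    """Grade a blind zone from the spatial relationship: the closer the
    target vehicle is to the zone, the higher the level. Bin edges are
    assumed values."""
    if distance_to_zone_m < 5.0:
        return 2    # high risk: vehicle at or near the zone
    if distance_to_zone_m < 15.0:
        return 1    # medium risk
    return 0        # low risk

def display_state(level: int) -> dict:
    """Map a risk level to a display state: high-risk zones are highlighted
    and blink, low-risk zones are dimmed so they do not distract the driver."""
    return [
        {"color": "gray", "opacity": 0.3, "blink": False},    # level 0
        {"color": "yellow", "opacity": 0.6, "blink": False},  # level 1
        {"color": "red", "opacity": 0.9, "blink": True},      # level 2
    ][level]

# Target vehicle alongside a truck's left blind zone (assumed 2 m away):
print(display_state(risk_level(2.0)))   # highlighted, blinking red
print(display_state(risk_level(40.0)))  # dimmed gray
```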
Through this embodiment, the risk level of a blind area is determined based on the relative position of the target vehicle and the blind area, and blind areas of higher risk levels are displayed in a more striking way so that the driver notices them more easily, while the prominence of blind areas with relatively low risk levels can be reduced appropriately to avoid interfering with the driver's vision. Changing the display mode for blind areas of different risk levels makes the blind area display more targeted and achieves a better prompting effect.
The embodiment of the application further provides a blind area determining device based on the foregoing embodiment, and fig. 9 is a schematic structural diagram of the blind area determining device provided by the embodiment of the application. The blind area determining device comprises an acquisition module 201, a determining module 202 and a display module 203.
The acquiring module 201 is configured to acquire object information of a nearby object of the target vehicle;
the determining module 202 is configured to determine a blind area of the nearby object based on object information of the nearby object;
the display module 203 is configured to display the blind area of the nearby object.
In some embodiments, the nearby objects include vehicle objects; the object information comprises the position, the orientation and the vehicle type of a vehicle object; the determining module 202 is further configured to determine a blind area of the vehicle object based on a position, an orientation, and a vehicle type of the vehicle object; the blind area of the vehicle object is an area that cannot be observed by a driver of the vehicle object.
In some embodiments, the nearby objects include obstacle objects; the object information includes an obstacle boundary of the obstacle object; the determining module 202 is further configured to determine a preset blind area of the obstacle object based on the obstacle boundary; the preset blind area represents an area which cannot be observed by other objects due to shielding of the obstacle.
In some embodiments, the nearby objects include pedestrian objects; the object information includes a line-of-sight range of a pedestrian object; the determining module 202 is further configured to determine a blind area of the pedestrian object based on the line of sight range of the pedestrian.
In some embodiments, the display module 203 is further configured to display a bird's eye view and/or display a second image corresponding to a blind area of the adjacent object at a target position of a display panel; the bird's eye view comprises a first image of the target vehicle and a second image corresponding to a blind area of the adjacent object, and the target position of the display panel is the position of the adjacent object in the real scene, which is observed by a driver of the target vehicle through the display panel.
In some embodiments, the blind area determining apparatus further includes an adjustment module for displaying the blind areas of different adjacent objects in different colors in case of including at least two adjacent objects and/or displaying the different types of sub-blind areas in different colors in case that the blind areas of the adjacent objects include the different types of sub-blind areas.
In some embodiments, the adjustment module is further configured to obtain a spatial relationship between the target vehicle and the nearby object; determining a risk level of a blind zone of the nearby object based on the spatial relationship; and adjusting the display state of the blind area of the adjacent object based on the risk level of the blind area of the adjacent object.
In some embodiments, the blind area determining device further includes a prompt module, where the prompt module is configured to obtain a position of the target vehicle; determining the relative position of the target vehicle and the blind area of the adjacent object based on the position of the target vehicle and the blind area of the adjacent object; and sending prompt information based on the relative positions of the target vehicle and the blind areas of the adjacent objects.
The embodiment of the application also provides a vehicle based on the embodiment. Fig. 10 is a schematic structural diagram of a vehicle according to an embodiment of the present application. The vehicle comprises a vehicle body 301, an acquisition module 302, a determination module 303 and a presentation module 304.
The acquiring module 302 is configured to acquire object information of a nearby object of the target vehicle;
the determining module 303 is configured to determine a blind area of the nearby object based on object information of the nearby object;
the display module 304 is configured to display the blind area of the nearby object.
In some embodiments, the nearby objects include vehicle objects; the object information comprises the position, the orientation and the vehicle type of a vehicle object; the determining module 303 is further configured to determine a blind area of the vehicle object based on a position, an orientation, and a vehicle type of the vehicle object; the blind area of the vehicle object is an area that cannot be observed by a driver of the vehicle object.
In some embodiments, the nearby objects include obstacle objects; the object information includes an obstacle boundary of the obstacle object; the determining module 303 is further configured to determine a preset blind area of the obstacle object based on the obstacle boundary; the preset blind area represents an area which cannot be observed by other objects due to shielding of the obstacle.
In some embodiments, the nearby objects include pedestrian objects; the object information includes a line-of-sight range of a pedestrian object; the determining module 303 is further configured to determine a blind area of the pedestrian object based on the line of sight range of the pedestrian.
In some embodiments, the display module 304 is further configured to display a bird's eye view and/or display a second image corresponding to a blind area of the adjacent object at a target position of a display panel; the bird's eye view comprises a first image of the target vehicle and a second image corresponding to a blind area of the adjacent object, and the target position of the display panel is the position of the adjacent object in the real scene, which is observed by a driver of the target vehicle through the display panel.
In some embodiments, the vehicle further includes an adjustment module for displaying the blind areas of different nearby objects in different colors in the case of including at least two nearby objects, and/or displaying different types of sub-blind areas in different colors in the case where the blind area of a nearby object includes different types of sub-blind areas.
In some embodiments, the adjustment module is further configured to obtain a spatial relationship between the target vehicle and the nearby object; determining a risk level of a blind zone of the nearby object based on the spatial relationship; and adjusting the display state of the blind area of the adjacent object based on the risk level of the blind area of the adjacent object.
In some embodiments, the vehicle further includes a prompt module, where the prompt module is configured to obtain the position of the target vehicle; determine the relative position of the target vehicle and the blind area of the nearby object based on the position of the target vehicle and the blind area of the nearby object; and send prompt information based on the relative position of the target vehicle and the blind area of the nearby object.
The description of the apparatus and vehicle embodiments above is similar to that of the method embodiments above, with similar benefits as the method embodiments. In some embodiments, the apparatus and the vehicle provided by the embodiments of the present application may be used to perform the method described in the above method embodiments, and for technical details not disclosed in the embodiments of the apparatus of the present application, reference should be made to the description of the embodiments of the method of the present application.
It should be noted that, in the embodiments of the present application, if the above blind area display method is implemented in the form of software function modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the embodiments of the present application that is essential, or that contributes to the related art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the application are not limited to any specific combination of hardware, software, and firmware.
An embodiment of the present application provides a computer device including a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor implements some or all of the steps of the above method when executing the program.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements some or all of the steps of the above method. The computer readable storage medium may be transitory or non-transitory.
Embodiments of the present application provide a computer program comprising computer readable code which, when run on a computer device, causes a processor in the computer device to perform some or all of the steps of the above method.
Embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, implements some or all of the steps of the above method. The computer program product may be implemented by hardware, software, or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium; in other embodiments, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It should be noted here that: the above description of various embodiments is intended to emphasize the differences between the various embodiments, the same or similar features being referred to each other. The above description of apparatus, storage medium, computer program and computer program product embodiments is similar to that of method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus, the storage medium, the computer program and the computer program product of the present application, reference should be made to the description of the embodiments of the method of the present application.
Fig. 11 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application. As shown in Fig. 11, the hardware entity of the computer device 400 includes: a processor 401 and a memory 402, wherein the memory 402 stores a computer program executable on the processor 401, and the processor 401 implements the steps of the method of any of the embodiments described above when executing the program.
The memory 402 is configured to store instructions and applications executable by the processor 401, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 401 and the modules of the computer device 400; it may be implemented by a flash memory (FLASH) or a random access memory (Random Access Memory, RAM).
The processor 401, when executing the program, implements the steps of the blind area display method of any one of the above embodiments. The processor 401 generally controls the overall operation of the computer device 400.
An embodiment of the present application provides a computer storage medium storing one or more programs executable by one or more processors to implement the steps of the blind area display method of any of the embodiments above.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application.
The processor may be at least one of an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above processor function may also be of another type, which is not specifically limited in the embodiments of the present application.
The computer storage medium/memory may be a read only memory (Read Only Memory, ROM), a programmable read only memory (Programmable Read-Only Memory, PROM), an erasable programmable read only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a ferromagnetic random access memory (Ferromagnetic Random Access Memory, FRAM), a flash memory (Flash Memory), a magnetic surface memory, an optical disk, or a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM); it may also be any of various terminals that include one or any combination of the above memories, such as mobile phones, computers, tablet devices, and personal digital assistants.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above steps/processes do not imply an order of execution; the execution order of the steps/processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated in one processing unit, or each unit may serve as a single unit separately, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units. Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a read only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, if the above integrated units of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed by the present application, and such changes and substitutions are intended to be covered by the protection scope of the present application.

Claims (10)

1. A blind area display method, the method comprising:
acquiring object information of a nearby object of a target vehicle;
determining a blind area of the nearby object based on the object information of the nearby object;
and displaying the blind area of the nearby object.
2. The method of claim 1, wherein the nearby object comprises a vehicle object, and the object information comprises a position, an orientation, and a vehicle type of the vehicle object; the determining the blind area of the nearby object based on the object information of the nearby object comprises:
determining the blind area of the vehicle object based on the position, the orientation, and the vehicle type of the vehicle object, wherein the blind area of the vehicle object is an area that cannot be observed by a driver of the vehicle object.
3. The method of claim 1, wherein the nearby object comprises an obstacle object, and the object information comprises an obstacle boundary of the obstacle object; the determining the blind area of the nearby object based on the object information of the nearby object comprises:
determining a preset blind area of the obstacle object based on the obstacle boundary, wherein the preset blind area represents an area that cannot be observed by other objects due to occlusion by the obstacle object.
4. The method of claim 1, wherein the nearby object comprises a pedestrian object, and the object information comprises a line-of-sight range of the pedestrian object; the determining the blind area of the nearby object based on the object information of the nearby object comprises:
determining the blind area of the pedestrian object based on the line-of-sight range of the pedestrian object.
5. The method of claim 1, wherein the displaying the blind area of the nearby object comprises at least one of:
displaying a bird's eye view, wherein the bird's eye view comprises a first image of the target vehicle and a second image corresponding to the blind area of the nearby object;
and displaying a second image corresponding to the blind area of the nearby object at a target position of a display panel, wherein the target position of the display panel is a position at which a driver of the target vehicle observes the nearby object in a real scene through the display panel.
6. The method according to any one of claims 1 to 5, further comprising at least one of:
in a case where at least two nearby objects are included, displaying the blind areas of different nearby objects in different colors;
in a case where the blind area of the nearby object comprises different types of sub-blind areas, displaying the different types of sub-blind areas in different colors.
7. The method of any one of claims 1 to 5, further comprising, after the displaying the blind area of the nearby object:
acquiring the position of the target vehicle;
determining a relative position of the target vehicle and the blind area of the nearby object based on the position of the target vehicle and the blind area of the nearby object;
and sending prompt information based on the relative position of the target vehicle and the blind area of the nearby object.
8. The method of any one of claims 1 to 5, further comprising, after the displaying the blind area of the nearby object:
acquiring a spatial relationship between the target vehicle and the nearby object;
determining a risk level of the blind area of the nearby object based on the spatial relationship;
and adjusting a display state of the blind area of the nearby object based on the risk level of the blind area of the nearby object.
9. A blind area determination device, the device comprising:
an acquisition module configured to acquire object information of a nearby object of a target vehicle;
a determining module configured to determine a blind area of the nearby object based on the object information of the nearby object;
and a display module configured to display the blind area of the nearby object.
10. A vehicle, comprising:
a vehicle body;
an acquisition module configured to acquire object information of a nearby object of the vehicle;
a determining module configured to determine a blind area of the nearby object based on the object information of the nearby object;
and a display module configured to display the blind area of the nearby object.
CN202311100365.7A 2023-08-29 2023-08-29 Blind area display method and vehicle Pending CN117022117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311100365.7A CN117022117A (en) 2023-08-29 2023-08-29 Blind area display method and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311100365.7A CN117022117A (en) 2023-08-29 2023-08-29 Blind area display method and vehicle

Publications (1)

Publication Number Publication Date
CN117022117A (en) 2023-11-10

Family

ID=88639777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311100365.7A Pending CN117022117A (en) 2023-08-29 2023-08-29 Blind area display method and vehicle

Country Status (1)

Country Link
CN (1) CN117022117A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination