CN115904156A - Display method, electronic device and vehicle - Google Patents


Info

Publication number
CN115904156A
CN115904156A (application CN202211321463.9A)
Authority
CN
China
Prior art keywords
vehicle
scene
presenting
lane
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211321463.9A
Other languages
Chinese (zh)
Inventor
李青
王睿
侯珩
寿心悦
黄莹
王璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Beijing Jidu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jidu Technology Co Ltd filed Critical Beijing Jidu Technology Co Ltd
Priority to CN202211321463.9A
Publication of CN115904156A
Legal status: Pending


Abstract

Embodiments of the present application provide a display method, an electronic device, and a vehicle. A vehicle model and a scene are displayed on a user interface, the vehicle model being rendered based on the appearance of the vehicle and the scene being rendered based on the environment information where the vehicle is located. A plurality of visual elements are then presented at corresponding positions of the scene according to the driving state of the vehicle, the visual elements comprising several of the following: navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface movement units and accompanying effects, and symbolic elements. By displaying the scene and the vehicle model in the user interface and presenting the visual elements in the scene according to the driving state, the technical solution prompts the user about the environment information and the driving state through rich visual effects, lets the user identify important information at the visual level, and improves prompting effectiveness compared with a voice-only approach.

Description

Display method, electronic device and vehicle
Technical Field
Embodiments of the present application relate to the technical field of vehicles, and in particular to a display method, an electronic device, and a vehicle.
Background
In the related art, environment information around a vehicle can be sensed by detection devices such as sensors, and the user can be warned to improve driving safety. At present, however, such warnings are mostly delivered as voice prompts or warning sounds, and their prompting effect is limited.
Disclosure of Invention
Embodiments of the present application provide a display method, an electronic device, and a vehicle, aiming to solve the problem in the prior art that the prompting effect of such warnings is limited.
In a first aspect, an embodiment of the present application provides a display method, including:
displaying a vehicle model and a scene on a user interface, wherein the vehicle model is rendered based on the appearance of the vehicle, and the scene is rendered based on the environment information where the vehicle is located; and
presenting a plurality of visual elements at corresponding positions of the scene according to the driving state of the vehicle, wherein the visual elements comprise several of the following: navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface movement units and accompanying effects, and symbolic elements.
In a second aspect, an embodiment of the present application provides an electronic device, including a storage component, a display component, and a processing component; the storage component stores one or more computer instructions for the processing component to invoke and execute to implement the display method of the first aspect.
In a third aspect, embodiments of the present application provide a vehicle comprising a vehicle body and the electronic device of the second aspect disposed in the vehicle body.
Embodiments of the present application thus provide a display method, an electronic device, and a vehicle. A vehicle model and a scene are displayed on a user interface, the vehicle model being rendered based on the appearance of the vehicle and the scene being rendered based on the environment information where the vehicle is located; a plurality of visual elements are presented at corresponding positions of the scene according to the driving state of the vehicle, the visual elements comprising several of the following: navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface movement units and accompanying effects, and symbolic elements. By displaying the scene and the vehicle model in the user interface and presenting the visual elements in the scene according to the driving state, the technical solution prompts the user about the environment information and the driving state through rich visual effects, lets the user identify important information at the visual level, and improves prompting effectiveness compared with a voice-only approach.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 illustrates a flow chart of one embodiment of a display method provided herein;
FIG. 2a is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 2b is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 3a is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 3b is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 3c is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 3d is a schematic diagram illustrating one embodiment of a user interface provided in the present application;
FIG. 3e is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 4 is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 5 is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 6 is a schematic diagram illustrating one embodiment of a user interface provided herein;
FIG. 7 illustrates a schematic diagram of one embodiment of a display device provided herein;
FIG. 8 illustrates a schematic diagram of one embodiment of an electronic device provided herein;
FIG. 9 illustrates a schematic view of one embodiment of a vehicle provided herein.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and figures of this application include operations that occur in a particular order. It should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 merely distinguish the operations from one another; the numbers themselves do not imply any order of execution. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. It should also be noted that the terms "first", "second", and the like herein distinguish different messages, devices, modules, and so on; they neither imply a sequential order nor require that the "first" and "second" items be of different types.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort shall fall within the protection scope of the present application.
Fig. 1 shows a flowchart of an embodiment of a display method provided in the present application. The technical solution of this embodiment may be executed by an electronic device disposed in a vehicle; the electronic device may integrate functions such as positioning, communication, display, and detection to implement various controls of the vehicle. The vehicle referred to herein may be a road vehicle, an aircraft, a ship, a land-air dual-purpose craft, or the like, and a road vehicle may be a large, medium, or small vehicle such as an automobile or a train. For ease of understanding, one or more of the embodiments described below mainly take an automobile as an example, but the technical solution of the present application is not limited thereto.
The method may include the steps of:
101: the vehicle model and the scene are displayed at a user interface.
The user interface is a display interface provided by the electronic device; in the case that the electronic device is a vehicle-mounted device, the user interface is a vehicle-mounted display interface. The user interface may be a Virtual Reality (VR) interface, an Augmented Reality (AR) interface, or an interface presented on a display screen of the electronic device, which is not limited in this application.
The vehicle model is rendered based on the appearance of the vehicle, and the scene is rendered based on detected environment information of the current environment where the vehicle is located, so that the scene reflects the real state of the environment; for example, the scene may include lanes, lane lines, surrounding buildings, and the like. Corresponding rendering data may be pre-configured for the environment information, from which the scene of the current environment can then be rendered.
Where the vehicle is an automobile, the vehicle model is specifically an automobile model. The vehicle model may be displayed at the corresponding position in the scene according to the vehicle's position in the environment.
In order to improve the visual effect, the scene may be a three-dimensional scene or the like.
102: presenting a plurality of visual elements at respective locations of the scene according to a driving state of the vehicle.
Wherein the plurality of visual elements may include several of the following: navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface movement units and accompanying effects, and symbolic elements.
These elements may be carried on different layers that are stacked together to form auxiliary visual content, which can then be overlaid onto the scene, thereby achieving a fused display of the visual elements, the scene, and the vehicle model.
The visual elements, the scene, and the vehicle model may be fused to generate a display frame, which is then shown on the display interface.
The stacking order of the visual elements, from bottom to top, may be: navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface movement units and accompanying effects, and symbolic elements.
The navigation elements may include information prompting or recommending a lane, such as a lane guide element pointing from the vehicle-head position in the driving direction, or a linear guide element displayed at a lane-line position of the driving lane. An auxiliary virtual element may be a virtual representation of an environment object that reflects a change in that object's state, such as a parking-space prompt element. An auxiliary driving guide element may be an element that guides along the travel route in an assisted-driving state. The road surface movement units and accompanying effects may represent dynamically perceived objects, such as pedestrians, other vehicles, and animals. The symbolic elements may include icon-like elements that symbolically represent the vehicle or the environment state in a way consistent with the user's knowledge of existing traffic signs and of the vehicle's own functions, such as speed-limit information, the current speed, an electronic-eye prompt sign, or an upgrade/downgrade prompt sign between driving modes. Specific implementations of the different visual elements are illustrated below; it should be noted that the present application is not limited thereto.
The types of elements corresponding to different driving states and the specific display states of the elements may be different, and the display states may include colors, shapes, sizes, dynamic displays, static displays, and the like, which is not limited in the present application.
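As a rough sketch of the layered compositing described above, the bottom-to-top stacking order could be modeled as ordered layers sorted before drawing. The class, layer names, and numeric values below are our own illustrative choices; the patent only specifies the relative order.

```python
from enum import IntEnum

class Layer(IntEnum):
    """Bottom-to-top stacking order of the visual-element layers
    described above (numeric values are illustrative)."""
    NAVIGATION = 0
    AUXILIARY_VIRTUAL = 1
    DRIVING_GUIDE = 2
    ROAD_UNITS_AND_EFFECTS = 3
    SYMBOLIC = 4

def composite(elements):
    """Sort elements by layer so lower layers are drawn first and
    higher layers are overlaid on top."""
    return sorted(elements, key=lambda e: e["layer"])

frame = composite([
    {"name": "speed_limit_icon", "layer": Layer.SYMBOLIC},
    {"name": "lane_guide", "layer": Layer.NAVIGATION},
    {"name": "pedestrian_model", "layer": Layer.ROAD_UNITS_AND_EFFECTS},
])
# Drawn in order: lane_guide, pedestrian_model, speed_limit_icon
```

Sorting by an `IntEnum` keeps the draw order explicit while letting each element be configured independently of when it was added.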
According to this embodiment, the scene and the vehicle model are displayed in the user interface, and the visual elements are presented in the scene according to the driving state of the vehicle. Rich visual effects thus prompt the user about the environment information and the driving state, let the user identify important information at the visual level, and improve prompting effectiveness compared with a voice-only approach. The scene, the model, and the visual elements may all be represented in three-dimensional form, further improving the visual effect and giving the user a stronger sense of information immersion.
In some embodiments, since the prompt levels corresponding to different driving states may be different, presenting the plurality of visual elements at the respective positions of the scene according to the driving state of the vehicle may include:
determining a corresponding prompt level according to the driving state of the vehicle;
and presenting a plurality of corresponding visual elements at corresponding positions of the scene according to the driving state and the prompt level.
A vehicle may have multiple driving modes, such as a manual driving mode, an assisted driving mode, and an automatic driving mode. The driving state may be determined based on detection data from various sensors, manual operation data, program settings, and the like; the driving operation currently being executed by the vehicle can be determined from the driving state, and different visual prompts can then be given.
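The mapping from driving state to prompt level and visual elements could be sketched as a lookup table. Both the level names and the element lists below are illustrative assumptions; the patent does not enumerate concrete levels.

```python
def visual_prompt(driving_state):
    """Map a detected driving state to a prompt level and the visual
    elements to present (illustrative mapping; the patent leaves the
    concrete levels and element sets unspecified)."""
    table = {
        "lane_switch":    ("notice",  ["switching_guide", "placeholder"]),
        "lane_departure": ("warning", ["second_linear_prompt"]),
        "blind_spot":     ("warning", ["second_object_model",
                                       "second_detection_prompt"]),
    }
    # Unrecognised states fall back to an informational level with
    # no extra elements.
    return table.get(driving_state, ("info", []))
```

Pre-configuring such a table matches the idea, stated below, that the visual elements corresponding to different driving states may be configured in advance.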
For ease of understanding, the following describes the technical solution of the present application taking an automobile as the vehicle, with reference to several driving-state scenarios. In the implementations listed below, each driving state may describe only one or more visual elements, but it should be understood that other visual elements corresponding to the current driving state may also be presented in the scene. The visual elements corresponding to different driving states may be configured in advance, so that when the vehicle is sensed to be in a certain driving state, the corresponding display frame is presented in the user interface.
In one driving state scene, presenting a plurality of visual elements at respective positions of the scene according to the driving state of the vehicle may include:
and determining that the vehicle is in a lane switching mode according to the driving state of the vehicle, and presenting corresponding switching guide elements at corresponding positions in the scene.
The lane switching mode may include a merging mode or a lane-changing mode; in either case a lane change occurs and the vehicle switches from one lane to another. The switching guide element may be a navigation element used, for example, to indicate the target lane to be switched to.
Since switching from one lane to another takes time while the vehicle is in the lane switching mode, the switching guide elements serve as prompts during the switching process. In some embodiments, determining that the vehicle is in the lane switching mode according to the driving state of the vehicle and presenting corresponding switching guide elements at corresponding positions in the scene includes:
determining that the vehicle performs a lane switching operation according to the driving state of the vehicle, and presenting a first switching guide element pointing to a target lane at a front position of the vehicle model in the scene under the condition that the distance between the driving position of the vehicle and the switching end point is less than a first preset distance;
presenting, in the scene, a second switching guide element pointing to the target lane at a position ahead of the vehicle model within a second predetermined distance, or within a predetermined time, after the vehicle's travel position reaches and passes the switching end point.
The first switching guide element and the second switching guide element may be dynamic elements, and present a dynamic visual effect, where the first switching guide element and the second switching guide element have different display states, such as different colors, different shapes, and the like.
The position ahead of the vehicle may be near the vehicle-head position. As the vehicle travels, the first and second switching guide elements can achieve the visual effect of moving along with the changing vehicle position.
Through the first switching guide element, the user can be reminded to switch lanes a certain distance before the vehicle reaches the switching end point.
When the vehicle reaches the switching end point, the first switching guide element can be updated to the second switching guide element, which may persist for a certain duration or distance, reminding the user to choose whether to merge according to their own needs; at this time the system may re-plan the driving route, and after the second predetermined distance or the predetermined time is exceeded, the vehicle can be controlled to drive according to the new route.
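The element selection just described could be sketched as follows. The 50 m and 3 s thresholds are illustrative stand-ins for the patent's "first predetermined distance" and "predetermined time", and the element names are our own labels.

```python
def switching_guide(dist_to_end_m, time_past_end_s=None,
                    first_dist_m=50.0, hold_s=3.0):
    """Select the switching guide element to show ahead of the vehicle
    model during a lane switch (thresholds are illustrative)."""
    if time_past_end_s is None:  # switching end point not yet reached
        return "first_switching_guide" if dist_to_end_m < first_dist_m else None
    if time_past_end_s <= hold_s:  # shortly past the end point
        return "second_switching_guide"
    return None  # past the hold window: vehicle follows the new route
```

The two elements differ in display state (e.g. color and shape), so swapping the returned name is enough to drive the transition from the first guide to the second.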
Fig. 2a and 2b show schematic interface diagrams of the vehicle model, the first switching guide element, and the second switching guide element, although the present application is not limited thereto.
In addition, other objects may exist in the target lane, such as other vehicles (vehicles other than the own vehicle), pedestrians, and animals. Therefore, in some embodiments, corresponding early-warning prompt elements may be presented in the scene based on at least one of the following determinations: whether a first object exists in the target lane, whether the target lane is abnormal, and whether the lane line is a solid line.
The early-warning prompt element may comprise a road surface movement unit and accompanying effect, an auxiliary driving guide element, and/or an auxiliary virtual element, and is used to prompt the user about objects in the current environment, the danger level, and so on. The lane line here refers to the lane line between the current lane and the target lane, which may be a solid line, a dashed line, or absent (no lane line at all).
In one implementation, presenting the corresponding early-warning prompt elements in the scene based on at least one of the determinations of whether the first object exists in the target lane, whether the target lane is abnormal, and whether the lane line is a solid line may include:
if the target lane has a first vehicle object, presenting a first object model at a position corresponding to the first object in the scene; in addition, a first placeholder element can be presented at a switching position corresponding to the target lane.
If the first object does not exist in the target lane, presenting a second placeholder element at a switching position corresponding to the target lane in the scene; the display state of the first placeholder element is different from the display state of the second placeholder element;
if the target lane is abnormal, presenting an abnormal prompt element at a corresponding position of the target lane;
and if the lane line is a solid line, presenting a first linear prompt element at the position corresponding to the lane line.
The first placeholder element and the second placeholder element may have a designated shape, for example a rectangle, and are used to represent the position the vehicle will occupy after switching to the target lane; they may belong to the auxiliary virtual elements, and their different display states may be, for example, different colors.
The first object model is a road surface movement unit used to represent the perceived first object; it may be given a specific color distinguished from the vehicle model, such as red, to serve as a warning.
The abnormality prompt element may be overlaid on the lane area of the target lane displayed in the user interface and may have a specific color, such as red, so that the target lane appears marked in red as a warning.
Optionally, linear guide elements may be presented at the two lane-line positions of the current lane where the vehicle is located, to guide and indicate the vehicle's current lane. In the case that the lane line adjacent to the target lane is a solid line, the first linear prompt element may be presented at that lane-line position. The first linear prompt element and the linear guide element have different display states: for example, the linear guide element may be a bar of a specific color and thickness, and the first linear prompt element may be obtained by changing the color of the linear guide element.
The first linear prompt element may be used to indicate that lanes cannot be switched because the lane line is solid; it may be displayed for a certain period and then dismissed, returning to the initial state.
It should be noted that the presence of a first object in the target lane, an abnormality of the target lane, and a solid lane line may occur simultaneously, so one or more of the first object model, the placeholder elements, the abnormality prompt element, and the first linear prompt element may be presented at the same time.
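The combined checks above could be sketched as a single function that collects the elements to present. The element names follow the text; the branching logic is our own illustrative reading.

```python
def early_warning_elements(has_first_object, lane_abnormal, line_is_solid):
    """Collect the early-warning elements to present for a lane
    switch, combining the three independent checks (sketch only)."""
    elements = []
    if has_first_object:
        elements += ["first_object_model", "first_placeholder"]
    else:
        elements.append("second_placeholder")
    if lane_abnormal:
        elements.append("abnormality_prompt")   # lane area tinted e.g. red
    if line_is_solid:
        elements.append("first_linear_prompt")  # recolored lane line
    return elements
```

Because the checks are independent, any combination of the elements can appear at once, which matches the note that several of them may be presented simultaneously.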
Furthermore, due to size limitations of the user interface, the first object may exist in the target lane yet be too far away to be shown in the interface. Therefore, in some embodiments, presenting the first object model at the position corresponding to the first object in the scene if the first object exists in the target lane may include:
if the corresponding position of the first object is located outside the user interface, presenting a first early-warning element at the boundary position close to the first object in the scene;
if the first object exists in the target lane and its corresponding position is located within the user interface, presenting a first object model at that position in the scene, together with a dynamic visual effect of the first early-warning element fading from present to absent at the boundary position close to the first object.
The first early-warning element may belong to the road surface movement units and accompanying effects and serves the purpose of early warning; it may present a dynamic glowing effect such as that of a breathing light.
In addition, when the corresponding position of the first object is located within the user interface, a warning symbol or the like may be superimposed at the first object's position for further early-warning prompting.
Here, the corresponding position of the first object being located inside (or outside) the user interface may mean being located inside (or outside) the field of view corresponding to the current display frame of the user interface.
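The inside/outside decision could be sketched in screen space as follows. The coordinate convention and the clamping of the warning to the nearest boundary are our own illustrative choices.

```python
def object_presentation(obj_x, obj_y, view_w, view_h):
    """Show the first object model when the object lies inside the
    current field of view; otherwise show the first early-warning
    element at the boundary position nearest the off-screen object."""
    inside = 0 <= obj_x <= view_w and 0 <= obj_y <= view_h
    if inside:
        return {"element": "first_object_model", "at": (obj_x, obj_y)}
    # Clamp to the boundary position closest to the off-screen object,
    # where the breathing-light effect would be rendered.
    edge = (min(max(obj_x, 0), view_w), min(max(obj_y, 0), view_h))
    return {"element": "first_early_warning", "at": edge}
```

As the object moves into view, the returned element switches from the edge warning to the object model, which corresponds to the fade-out of the first early-warning element described above.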
For easy understanding, fig. 3a to 3d respectively show interface schematic diagrams of visual elements presented in the cases that the first object exists in the target lane and the corresponding position of the first object is located outside the user interface, the first object exists in the target lane and the corresponding position of the first object is located inside the user interface, the target lane is abnormal, and the lane line is a solid line, but the application is not limited thereto.
In fig. 3a, the other vehicle is located outside the current field of view of the user interface. In the scene 30 displayed on the user interface, the vehicle model is presented, a first placeholder element is presented at the expected vehicle position in the target lane, and a first early-warning element is presented at the boundary position the other vehicle is approaching; the first early-warning element may have a glowing breathing effect, and a warning sound or the like may also be output to further prompt the user. In fig. 3b, the other vehicle has entered the current field of view; the vehicle model, the other-vehicle model, the first placeholder element, and so on are presented in the scene, and the first early-warning element disappears.
In fig. 3c, the vehicle model, a placeholder element (the first or second placeholder element), the abnormality prompt element, and so on are presented in the scene displayed on the user interface.
In fig. 3d, the vehicle model, the first linear prompt element, and so on are presented in the scene displayed on the user interface.
In addition, in the above schematic diagrams, linear guide elements may be presented at the two lane-line positions of the current lane; these may take corresponding display states depending on whether the lane lines are solid or dashed.
In addition, when the vehicle performs the lane switching operation, or when the target lane changes from abnormal to normal or the lane line changes from solid to dashed so that the lane switching operation can continue, the following transition may occur once the center point of the vehicle model crosses the lane line: the linear guide element at the left-side lane line of the original lane may disappear and, at the same time, a linear guide element is superimposed at the right-side lane line of the target lane; once the vehicle model is located in the target lane, the placeholder element disappears, and linear guide elements are displayed normally at the lane lines on both sides of the target lane.
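The lane-line highlight transition described above could be sketched for a rightward lane change, labelling the three lane lines, left to right, `original_left`, `shared`, and `target_right` (labels are our own):

```python
def highlighted_lane_lines(center_crossed):
    """Which lane lines carry the linear guide element during a
    rightward lane change (sketch). Before the vehicle model's center
    crosses the shared line, the current lane's two lines are
    highlighted; after crossing, the highlight covers the target
    lane's two lines (the shared line plus the target's right line)."""
    if not center_crossed:
        return {"original_left", "shared"}
    return {"shared", "target_right"}
```

Only the `original_left` highlight is dropped and `target_right` added at the crossing, so the shared line stays highlighted throughout, avoiding any visual gap during the switch.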
In addition, a lane guide element pointing from the vehicle-head position in the driving direction may also be presented at the vehicle-head position and may change dynamically as the vehicle steers.
In addition, since the vehicle steers when performing a lane switching operation, a turn-light prompt element with a glowing effect may be presented at the vehicle-head position in the scene, and a first detection prompt element in the form of a detection waveform pointing toward the first object model may be presented in the detection region of the vehicle model.
The turn-light prompt element may be presented at the vehicle-head position facing the steering direction; specifically, it may be presented at the position of the vehicle lights in the vehicle head.
Both the turn-light prompt element and the first detection prompt element may belong to the auxiliary driving guide elements, although they may also belong to other element types and thus be presented in different layers. In this way, while the vehicle performs a lane switching operation during road driving, the turn light blinks, the placeholder element occupies the target lane to represent the expected position, the other vehicle enters the frame from outside, the own vehicle emits detection waves, and the other vehicle can be marked in a specific color with a warning sign superimposed.
For ease of understanding, in the interface diagram shown in fig. 3e, the vehicle model, the other-vehicle model, the turn-light prompt element at the head position of the vehicle model, and the first detection prompt element at the rear position of the vehicle are presented in the scene 30 displayed in the user interface, although the application is not limited thereto.
Optionally, the turn-light prompt element may be in the shape of an arrow pointing in the switching direction.
In yet another implementation, in the case that the first placeholder element or the second placeholder element is presented in the scene, the method may further comprise:
and presenting an arrow indication element pointing to the switching direction at the front end position of the head corresponding to the first occupancy element or the second occupancy element.
The arrow indication element is used to represent the upcoming turn.
In yet another driving state scenario, the presenting a plurality of visual elements at respective positions of the scenario according to the driving state of the vehicle may comprise:
and presenting a second linear prompt element at a position corresponding to the lane line in the scene under the condition that the vehicle deviates from the lane line according to the running state of the vehicle.
The vehicle may be determined to have deviated toward the lane line when, based on detection data of the vehicle's running state, the distance between the vehicle position and the lane line is less than a certain distance.
To prompt that the vehicle is departing from the lane, a second linear prompt element may be presented at the position corresponding to the lane line; here the lane line refers to the line on the side toward which the vehicle deviates.
The second linear prompt element may have a specific color such as red and a specific thickness, and may present a dynamic effect such as flashing or a breathing-light effect on a bar-shaped element, although the present application is not limited thereto.
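The departure check just described can be sketched as follows. This is a hypothetical illustration only: the patent does not fix a threshold value or a data layout, so the threshold, field names, and return convention below are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

DEPARTURE_THRESHOLD_M = 0.3  # assumed minimum safe distance to a lane line


@dataclass
class DrivingState:
    """Assumed slice of the vehicle's running-state detection data."""
    lateral_dist_left_m: float   # distance from the vehicle side to the left lane line
    lateral_dist_right_m: float  # distance from the vehicle side to the right lane line


def departed_side(state: DrivingState) -> Optional[str]:
    """Return 'left' or 'right' if the vehicle is too close to that lane line."""
    if state.lateral_dist_left_m < DEPARTURE_THRESHOLD_M:
        return "left"
    if state.lateral_dist_right_m < DEPARTURE_THRESHOLD_M:
        return "right"
    return None
```

The second linear prompt element would then be rendered along the lane line on the side returned here.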
In one possible interface diagram, shown in fig. 4, a vehicle model is presented in the scene displayed by the user interface, and a second linear prompt element is presented at the lane line toward which the vehicle deviates.
In addition, if the vehicle is in the automatic driving mode, the automatic driving program may correct the lane departure when detecting that the vehicle has the lane departure, and therefore, in some embodiments, the method may further include:
in the case where the vehicle deviates from a lane line and the vehicle is in an autonomous driving mode, correcting a travel trajectory of the vehicle to ensure that the vehicle does not deviate from the lane line.
In the automatic driving mode, the second linear prompt element may be a wall-shaped prompt element with a specific shape: its height is greatest near the center of the vehicle and decreases gradually from there toward the head position and the tail position of the vehicle, so as to present the display effect of an AI wall intercepting the vehicle near the lane line during driving.
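The height profile of the wall-shaped element — highest near the vehicle center, tapering toward head and tail — can be sketched with a simple falloff function. The linear shape of the falloff is an assumption; the patent only specifies the qualitative profile.

```python
def wall_height(s: float, center: float, half_len: float, peak: float) -> float:
    """Height of the AI-wall at longitudinal offset s along the lane line.

    Peaks at the vehicle center and falls off linearly to zero at
    half_len ahead of and behind the center (linear falloff is assumed).
    """
    t = min(abs(s - center) / half_len, 1.0)  # 0 at the center, 1 at the ends
    return peak * (1.0 - t)
```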
In yet another driving state scene, the presenting a plurality of visual elements at corresponding positions of the scene according to the driving state of the vehicle may include:
determining, according to the driving state of the vehicle, that a second object exists in a blind area of the vehicle, presenting a second object model at the position corresponding to the blind area in the scene, and presenting a second detection prompt element in the form of a detection waveform at the detection area position of the vehicle model. The detection area may specifically be the blind area of the vehicle, or the like.
The second object model may belong to the road surface moving units and the accompanying effects, and the second detection prompt element may belong to the auxiliary driving guide elements, etc.
The second object may be another vehicle, a pedestrian, an animal, or the like.
The second object model may be in a specific color different from that of the vehicle model, such as red, to enhance the warning effect. For the specific implementation of the second detection prompt element, reference may be made to the display state of the first detection prompt element described above.
Referring to one possible interface diagram shown in FIG. 5, a vehicle model, a second object model (other vehicle model), and a second detection prompt element are presented in a scene displayed by a user interface.
In yet another driving state scene, the presenting a plurality of visual elements at corresponding positions of the scene according to the driving state of the vehicle may include:
presenting, according to the running state of the vehicle and in the case where a target object exists directly in front of, laterally in front of, directly behind, or laterally behind the vehicle, a target object model in a corresponding display state at the position corresponding to the target object in the scene; the display states of the target object models corresponding to different positions are different.
The target object may be another vehicle, or a pedestrian, etc. The following are described separately:
1) In the case where another vehicle object exists directly in front of the vehicle, the other vehicle model may be presented at the other vehicle position with a specific color, such as red, different from the color of the vehicle model in the user interface.
In addition, the user interface described herein may include at least a first display area and a second display area; the scene and the vehicle model may be displayed in the first display area.
Optionally, for a better prompt, a text prompt element may be presented in the scene and carried in the second display area, and the second display area may also be presented in a target color, such as red.
The second display area may be displayed superimposed on the first display area, etc.
In addition, different driving states may correspond to different prompt levels. In this driving state scene, the corresponding prompt level may be determined according to the distance between the vehicle and the target object; when the prompt level reaches a specified level, that is, when the distance between the vehicle and the target object is smaller than a preset distance, the second display area may be marked with the target color, and a text prompt element may further be presented in the scene.
For example, for the position directly in front of the vehicle, the vehicle is in a car-following state, and the distance to the other vehicle object determines whether a collision may occur; in this case, the method may further include:
presenting, according to the distance between the vehicle and the target object, at least one of a third early warning element at a position close to the upper boundary of the user interface in the scene and a text prompt element in the user interface.
Through the third early warning element, the character prompting element, the target color and the like, the user can be prompted to brake or decelerate so as to keep a safe distance and the like.
The third early warning element may be a visual element in a specific color that exhibits a flashing or breathing-light effect.
In one possible interface diagram shown in fig. 6, a vehicle model, a target object model, a third early warning element, a text prompt element, and the like are presented in the scene displayed in the user interface.
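The distance-based prompt levels described above might be computed as below. The two thresholds and the number of levels are illustrative assumptions; the patent only requires that the prompt level be derived from the distance to the target object.

```python
def prompt_level(distance_m: float, warn_m: float = 30.0, danger_m: float = 10.0) -> int:
    """Map the distance to the target object to a prompt level.

    0: no prompt; 1: mark the target object model in a warning color;
    2: additionally show the target color, text prompt element, and third
       early warning element (threshold values are assumed).
    """
    if distance_m < danger_m:
        return 2
    if distance_m < warn_m:
        return 1
    return 0
```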
2) In the case where another vehicle object exists laterally in front of the vehicle, the other vehicle model may be presented at the other vehicle position with a specific color, such as red, different from the color of the vehicle model in the user interface.
For example, when the vehicle passes through an intersection, for another vehicle object traveling straight at the laterally front position, an other vehicle model may be presented with a specific color different from the color of the vehicle model, so as to achieve the purpose of warning.
Further, the vehicle may also be controlled to brake or decelerate, etc.
3) In the case where another vehicle object exists directly behind the vehicle, another vehicle model with a specific color different from the color of the vehicle model may be presented.
If the other vehicle object is outside the user interface, a fourth early warning element or the like may be presented in the scene at the boundary position close to the other vehicle object; the fourth early warning element may be a visual element in a specific color with a flashing or breathing-light effect.
If the other vehicle object is located within the user interface, the other vehicle model may be presented instead.
4) In the case where a pedestrian object or another vehicle object exists laterally behind the vehicle, a pedestrian model or another vehicle model with a specific color may be presented.
In addition, a surround-view model of the vehicle and an environment image may be presented in the user interface; the surround-view model is generated by rendering based on the appearance information of the vehicle from a bird's-eye view; the environment image is obtained by rendering based on the surrounding environment data detected by the vehicle.
In yet another driving state scene, the presenting a plurality of visual elements at corresponding positions of the scene according to the driving state of the vehicle may include:
determining, according to the driving state of the vehicle, that the vehicle has switched to the automatic driving mode, and presenting a wall-shaped prompt element at the head position of the vehicle in the scene.
Optionally, to add to the visual effect, the wall-shaped prompt element may be presented at the head position in the scene with a dynamic growth effect from a first size to a second size and, after growing to the second size, with a dynamic change effect from a predetermined color to transparency; the second size is greater than the first size.
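The grow-then-fade animation of the wall-shaped element can be sketched as a pure function of time. The durations and sizes below are assumptions; the patent only fixes the order of the two phases (grow first, then fade to transparent).

```python
def wall_anim(t: float, grow_s: float = 0.5, fade_s: float = 0.5,
              size1: float = 1.0, size2: float = 2.0) -> tuple:
    """Return (size, alpha) of the wall-shaped element at time t seconds.

    Phase 1: grow from size1 to size2 over grow_s seconds, fully opaque.
    Phase 2: hold size2 and fade the predetermined color to transparent.
    """
    if t <= grow_s:
        return size1 + (size2 - size1) * (t / grow_s), 1.0
    fade_t = min((t - grow_s) / fade_s, 1.0)
    return size2, 1.0 - fade_t
```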
Furthermore, it is also possible to present a lane guidance element pointing in the direction of travel at the head position in the scene.
Optionally, in order to further increase the visual effect and improve the prompt effectiveness, the lane guide element may be composed of a linear element and a plurality of ring-shaped elements surrounding the linear element and distributed along it at a predetermined spacing.
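Placing the ring-shaped elements at a predetermined spacing along the linear element reduces to computing their longitudinal offsets. A minimal sketch follows; the spacing and guide length are illustrative parameters, and starting one spacing from the head is an assumption.

```python
def ring_positions(length_m: float, spacing_m: float) -> list:
    """Longitudinal offsets of the ring elements along the linear guide,
    one ring every spacing_m meters, starting one spacing from the head."""
    n = int(length_m // spacing_m)
    return [spacing_m * (i + 1) for i in range(n)]
```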
Further, in some embodiments, the method may further comprise: detecting a control event aiming at the vehicle, and determining corresponding target mirror moving parameters; and switching the scene and the vehicle model into a display state corresponding to the target mirror operating parameter.
The target mirror moving parameters (i.e., camera movement parameters) may include a view angle size, a view angle height, a focal point, a focal length, and the like.
In some embodiments, the switching of the scene and the vehicle model from the current mirror moving parameters to the display state corresponding to the target mirror moving parameters may include: displaying, on the user interface, display content corresponding to the transition process of switching from the current mirror moving parameters to the target mirror moving parameters; the display content includes a dynamically changing image of the scene and a dynamically changing image of the vehicle model.
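The transition between the current and target mirror moving (camera) parameters can be sketched as a per-field interpolation. The parameter set and the linear blend are assumptions; a real implementation would likely apply an easing curve over the transition.

```python
from dataclasses import dataclass


@dataclass
class CameraParams:
    """Assumed mirror moving (camera movement) parameter set."""
    fov_deg: float    # view angle size
    height_m: float   # view angle height
    focal_len: float  # focal length


def lerp_cam(cur: CameraParams, tgt: CameraParams, u: float) -> CameraParams:
    """Blend the current parameters toward the target; u in [0, 1]
    parameterizes the transition animation shown on the user interface."""
    mix = lambda a, b: a + (b - a) * u
    return CameraParams(mix(cur.fov_deg, tgt.fov_deg),
                        mix(cur.height_m, tgt.height_m),
                        mix(cur.focal_len, tgt.focal_len))
```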
Wherein detecting a control event for the vehicle may include at least one of:
detecting a switch of the vehicle from a first driving mode to a second driving mode;
detecting a switch of the vehicle from a first vehicle state to a second vehicle state;
detecting that the vehicle performs a target driving operation;
detecting a control operation for the vehicle itself;
detecting that the vehicle is in a target road condition environment;
detecting that the vehicle meets a corresponding early warning condition, and determining second mirror moving parameters corresponding to the early warning condition;
and,
detecting a user adjustment operation for the current mirror moving parameters.
The implementation process of detecting the control event for the vehicle may be detecting a touch operation by the user on a preset key, detecting a touch operation on a preset virtual key on the display interface, detecting that the driving state of the vehicle meets the trigger condition of the control event, detecting a gesture operation by the user, or the like.
The driving modes may include an assisted driving mode, a manual driving mode, and an automatic driving mode; the first driving mode and the second driving mode may each be any one of these, and the two modes are different.
Detecting the switching of the vehicle from the first driving mode to the second driving mode may be detecting switching between the assisted driving mode and the manual driving mode, between the assisted driving mode and the automatic driving mode, or between the manual driving mode and the automatic driving mode.
Further, the automatic driving modes include automatic driving modes of different levels, such as an L3 automatic driving mode and an L4 automatic driving mode; the assisted driving mode may include a semi-automatic driving mode, an adaptive cruise mode, and the like; and detecting the switching of the vehicle from the first driving mode to the second driving mode may also be switching between any of the driving modes described above.
The vehicle state may be a parked state, a driving state, a parking-out state, or the like;
detecting the switching of the vehicle from the first vehicle state to the second vehicle state may be detecting switching between parking and driving.
The target driving operation may be turning, lane changing, merging, making a U-turn, or the like;
the control operation may be opening a door, opening the trunk, turning on a light, charging, or the like;
the target road condition environment may be a closed road section, congestion, or the like;
meeting the corresponding early warning condition may be that the vehicle is too close to the vehicle ahead while moving forward, that the vehicle is too close to a lane line, or that there is a vehicle, a pedestrian, or an obstacle behind while reversing;
the user adjustment operation for the current mirror moving parameters may be a model interaction operation, a screen zoom operation, a rotation operation, or the like, whether the vehicle is traveling or parked.
In some embodiments, as can be seen from the above description, the user interface includes a first display area and a second display area;
the displaying the vehicle model and the scene on the user interface comprises:
displaying a vehicle model and a scene in a first display area of the user interface;
the method further comprises the following steps:
and displaying the second display area as a corresponding target color according to the driving state, and/or displaying a text prompt element corresponding to the driving state in the second display area.
Specifically, according to the prompt level corresponding to the driving state, when the prompt level reaches a predetermined level, the second display area is presented in the corresponding target color, and/or a text prompt element corresponding to the driving state is displayed in the second display area.
It can be understood that, to help the user perceive the strength of the prompt level, the second display area may be presented in the corresponding target color when the prompt level reaches a predetermined level, and the prompt levels may further be distinguished in combination with the text prompt element.
In some embodiments, the method may further comprise:
determining corresponding audio prompt data according to the driving state of the vehicle; and playing the audio prompt data.
Therefore, the purpose of effective prompt is realized in a mode of combining sound and pictures.
In addition to distinguishing the cue levels in combination with color and text, the cue levels may also be distinguished in combination with audio cue data.
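Combining the color, text, and audio cues per prompt level might look like the mapping below. The concrete level numbers, colors, strings, and file names are purely illustrative assumptions; the patent only requires that color, text, and audio prompts vary with the prompt level.

```python
# Assumed level -> presentation mapping for the second display area and audio.
PROMPT_STYLES = {
    0: {"color": None,     "text": None,            "audio": None},
    1: {"color": "orange", "text": "Attention",     "audio": "chime.wav"},
    2: {"color": "red",    "text": "Keep distance", "audio": "warning.wav"},
}


def present_prompt(level: int) -> dict:
    """Pick the second-display-area color, text element, and audio clip
    for a prompt level; unknown levels fall back to no prompt."""
    return PROMPT_STYLES.get(level, PROMPT_STYLES[0])
```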
An embodiment of the present application further provides a display device. Fig. 7 is a schematic diagram of an embodiment of the display device provided in an embodiment of the present application. As shown in fig. 7, the device includes: a display module 701.
The display module 701 is used for displaying the vehicle model and the scene on a user interface; the vehicle model is obtained by rendering based on the appearance of the vehicle, and the scene is obtained by rendering based on the environment information where the vehicle is located; and presenting a plurality of visual elements at corresponding positions of the scene according to the driving state of the vehicle, wherein the visual elements comprise a plurality of elements of navigation elements, auxiliary virtual elements, auxiliary driving guide elements, road surface moving units, auxiliary effects and symbolic elements.
Optionally, the display module 701 is specifically configured to determine that the vehicle is in a lane switching mode according to the driving state of the vehicle, and present a corresponding switching guide element at a corresponding position in the scene.
Optionally, the display module 701 is further specifically configured to present, in combination with at least one determination result of whether the first object exists in the target lane, whether the target lane is abnormal, and whether the lane line is a solid line, a corresponding early warning prompt element in the scene.
Optionally, the display module 701 is further specifically configured to determine that the vehicle performs a lane switching operation according to a driving state of the vehicle, and in a case that a distance between a vehicle driving position and the switching end point is smaller than a first predetermined distance, present a first switching guide element pointing to a target lane at a position ahead of the vehicle model in the scene; presenting a second switching guide element pointing to a target lane at a position ahead of the vehicle model in the scene within a second predetermined distance of or a predetermined time after the vehicle travel position reaches the switching end point and exceeds the switching end point.
Optionally, the display module 701 is further specifically configured to: if a first object exists in the target lane, present a first object model at the position corresponding to the first object in the scene and present a first placeholder element at the switching position corresponding to the target lane; if the first object does not exist in the target lane, present a second placeholder element at the switching position corresponding to the target lane in the scene, where the display state of the first placeholder element is different from that of the second placeholder element; if the target lane is abnormal, present an abnormality prompt element at the corresponding position of the target lane; and if the lane line is a solid line, present a first linear prompt element at the position corresponding to the lane line.
Optionally, the display module 701 is further specifically configured to, if the corresponding position of the first object is located outside the user interface, present a first warning element at a boundary position close to the first object in the scene; if the first object exists in the target lane and the corresponding position of the first object is located in the user interface, a first object model is presented at the corresponding position of the first object in the scene, and a dynamic visual effect of the first early warning element from the existence to the nonexistence is presented at the boundary position close to the first object.
Optionally, the display module 701 is further specifically configured to present a turn signal prompt element of a light-emitting effect at a head position in the scene, and present a first detection prompt element pointing to the first object model and being a detection waveform in the detection region of the vehicle model.
Optionally, the display module 701 is further specifically configured to present an arrow indication element pointing to the switching direction at the front end position of the vehicle head corresponding to the first placeholder element or the second placeholder element.
Optionally, the display module 701 is specifically configured to, when it is determined that the vehicle deviates from a lane line according to the driving state of the vehicle, present a second linear prompt element at a position corresponding to the lane line in the scene.
Optionally, the display module 701 is further specifically configured to, in a case that the vehicle deviates from a lane line and the vehicle is in an automatic driving mode, correct a driving track of the vehicle to ensure that the vehicle does not deviate from the lane line; wherein the second linear prompt element is specifically a wall prompt element.
Optionally, the display module 701 is specifically configured to determine, according to the driving state of the vehicle, that a second object exists in a blind area of the vehicle, present a second object model at the position corresponding to the blind area in the scene, and present a second detection prompt element in the form of a detection waveform at the detection area position of the vehicle model.
Optionally, the display module 701 is specifically configured to, according to a driving state of the vehicle, present a target object model in a corresponding display state at a position corresponding to a target object in the scene when the target object exists at a position right ahead, a position sideways ahead, a position straight behind, or a position sideways behind the vehicle; wherein, the display states corresponding to different positions are different.
Optionally, the display module 701 is further specifically configured to present, according to a distance between the vehicle and the target object, a third warning element at a position close to an upper boundary of the user interface in the scene, and present at least one of a text prompt element in the user interface.
Optionally, the display module 701 is specifically configured to determine, according to the driving state of the vehicle, that the vehicle switches to the automatic driving mode, and present a wall-shaped prompt element at the head position in the scene with a dynamic growth effect from a first size to a second size and, after growing to the second size, with a dynamic change effect from a predetermined color to transparency; wherein the second size is greater than the first size.
Optionally, the apparatus further comprises: the determining module is used for detecting a control event aiming at the vehicle and determining corresponding target mirror operation parameters; and switching the scene and the vehicle model into a display state corresponding to the target mirror operating parameter.
Optionally, the determining module is specifically configured to present, on the user interface, display content corresponding to a transition process of switching from the current mirror operation parameter to the target mirror operation parameter; the display content includes a dynamically changing image of the scene and a dynamically changing image of the vehicle model.
Optionally, the control event comprises: detecting a switch of the vehicle from a first driving mode to a second driving mode; detecting a switch of the vehicle from a first vehicle state to a second vehicle state; detecting that the vehicle performs a target driving operation; detecting a control operation for the vehicle itself; detecting that the vehicle is in a target road condition environment; detecting that the vehicle meets corresponding early warning conditions, and determining second mirror moving parameters corresponding to the early warning conditions; and detecting a user adjustment operation for the current mirror moving parameter.
Optionally, the user interface includes a first display area and a second display area, and the display module 701 is specifically configured to display the vehicle model and the scene in the first display area of the user interface; the method further comprises the following steps: and displaying the second display area as a corresponding target color according to the driving state, and/or displaying a text prompt element corresponding to the driving state in the second display area.
The display device shown in fig. 7 can perform the display method of the embodiment shown in fig. 1; the implementation principle and technical effects are not repeated here. The specific manner in which each module and unit of the display device performs operations has been described in detail in the method embodiments and will not be detailed again here.
In one possible design, an electronic device is further provided in the embodiments of the present application; as shown in fig. 8, the electronic device may include a storage component 801, a processing component 802, and a display component 803;
the storage component 801 stores one or more computer instructions for execution by the processing component 802 to implement the display method shown in fig. 1.
The processing component 802 may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 801 is configured to store various types of data to support operations at the terminal. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The display component 803 may be used to output a display interface, such as an electroluminescent (EL) element, a liquid crystal display or a micro-display of similar structure, a laser scanning display that projects directly onto the retina or the like, or an Augmented Reality (AR), Virtual Reality (VR), or Mixed Reality (MR) display device.
Of course, the computing device may also include other necessary components, such as input/output interfaces, communication components, and so on.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communications component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The computing device may be a physical device or an elastic computing host provided by a cloud computing platform, and the computing device may be a cloud server, and the processing component, the storage component, and the like may be a basic server resource rented or purchased from the cloud computing platform.
Fig. 9 shows a schematic structural diagram of a vehicle provided by the present application; as shown in fig. 9, the vehicle includes a vehicle body and the electronic device shown in fig. 8 disposed in the vehicle body.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps in the method embodiment of fig. 1 described above when executed.
Accordingly, the present application further provides a computer program product, and when the computer program product is executed, the steps in the method embodiment of fig. 1 can be implemented.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method of the various embodiments or of some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A display method, comprising:
displaying a vehicle model and a scene on a user interface, wherein the vehicle model is rendered based on the appearance of a vehicle, and the scene is rendered based on environmental information of the vehicle's surroundings;
and presenting one or more visual elements at corresponding positions in the scene according to the driving state of the vehicle, wherein the visual elements comprise one or more of: navigation elements, auxiliary virtual elements, driving-assistance guide elements, road-surface moving units with accompanying effects, and symbolic elements.
2. The method of claim 1, wherein presenting one or more visual elements at respective locations of the scene in accordance with a driving state of the vehicle comprises:
determining, according to the driving state of the vehicle, that the vehicle is in a lane switching mode, and presenting corresponding switching guide elements at corresponding positions in the scene, which comprises:
determining, according to the driving state of the vehicle, that the vehicle is performing a lane switching operation, and presenting a first switching guide element pointing to a target lane at a position ahead of the vehicle model in the scene when the distance between the vehicle's driving position and a switching end point is less than a first predetermined distance;
and presenting a second switching guide element pointing to the target lane at a position ahead of the vehicle model in the scene when the vehicle's driving position is within a second predetermined distance of the switching end point, or for a predetermined time after the driving position passes the switching end point.
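Purely as an illustration (not part of the claimed method), the two-stage selection of switching guide elements in claim 2 could be sketched as follows; the element names, distance thresholds, and hold time are all hypothetical.

```python
from typing import Optional

def select_guide_element(distance_to_end_point: float,
                         time_since_passed: Optional[float] = None,
                         first_distance: float = 50.0,
                         second_distance: float = 10.0,
                         hold_time: float = 2.0) -> Optional[str]:
    """Pick which lane-switch guide element to present.

    distance_to_end_point: metres from the vehicle's driving position to
        the switching end point.
    time_since_passed: seconds since the end point was passed, or None if
        it has not yet been passed.
    """
    if time_since_passed is not None:
        # After the end point: keep showing the second element for a short hold time.
        return "second_guide_element" if time_since_passed <= hold_time else None
    if distance_to_end_point < second_distance:
        return "second_guide_element"   # within the second predetermined distance
    if distance_to_end_point < first_distance:
        return "first_guide_element"    # within the first predetermined distance
    return None                         # too far from the end point to show anything
```

With the defaults above, the first element appears 50 m out and is replaced by the second element inside 10 m, which lingers for 2 s after the end point is passed.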
3. The method of claim 2, further comprising:
presenting corresponding early-warning prompt elements in the scene based on at least one of the following judgments: whether a first object exists in the target lane, whether the target lane is abnormal, and whether the lane line is a solid line, which comprises:
if a first object exists in the target lane, presenting a first object model at a position corresponding to the first object in the scene, and presenting a first placeholder element at a switching position corresponding to the target lane;
if the first object does not exist in the target lane, presenting a second placeholder element at a switching position corresponding to the target lane in the scene; the display state of the first placeholder element is different from the display state of the second placeholder element;
if the target lane is abnormal, presenting an abnormal prompt element at a corresponding position of the target lane;
and if the lane line is a solid line, presenting a first linear prompt element at the position corresponding to the lane line.
4. The method of claim 3, wherein the presenting a first object model at a location in the scene corresponding to a first object if the first object is present in the target lane comprises:
if the position corresponding to the first object lies outside the user interface, presenting a first early-warning element at the boundary of the scene closest to the first object;
and if the first object exists in the target lane and its corresponding position lies within the user interface, presenting the first object model at the position corresponding to the first object in the scene, and presenting a dynamic visual effect of the first early-warning element fading from visible to absent at the boundary position closest to the first object.
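A minimal sketch of the boundary-warning logic of claim 4, assuming a 2-D UI coordinate system with the origin at the top-left; the function name and tuple labels are illustrative, not from the patent.

```python
def place_object_indicator(obj_x, obj_y, ui_width, ui_height):
    """Return ('model', x, y) if the object lies inside the user interface,
    otherwise ('warning', x, y) with the position clamped to the nearest
    UI boundary, where the early-warning element would be drawn."""
    inside = 0 <= obj_x <= ui_width and 0 <= obj_y <= ui_height
    if inside:
        return ("model", obj_x, obj_y)
    # Clamp the off-screen position to the boundary nearest the object.
    x = min(max(obj_x, 0), ui_width)
    y = min(max(obj_y, 0), ui_height)
    return ("warning", x, y)
```

As the object moves into view, the indicator switches from `"warning"` to `"model"`, matching the fade-out of the early-warning element described in the claim.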
5. The method of claim 1, wherein presenting one or more visual elements at corresponding positions in the scene according to the driving state of the vehicle comprises any one of the following:
according to the running state of the vehicle, under the condition that the vehicle deviates from a lane line, presenting a second linear prompt element at a position corresponding to the lane line in the scene;
according to the running state of the vehicle, determining that a second object exists in a blind area of the vehicle, presenting a second object model at the position corresponding to the blind area in the scene, and, in combination, presenting a second detection prompt element showing a detection waveform at the detection-area position of the vehicle model;
according to the running state of the vehicle, under the condition that a target object exists at a position right in front of the vehicle, a position laterally in front of the vehicle, a position right behind the vehicle or a position laterally behind the vehicle, a target object model in a corresponding display state is presented at a position corresponding to the target object in the scene; wherein, the display states corresponding to different positions are different;
and/or
Determining, according to the driving state of the vehicle, that the vehicle has switched to an automatic driving mode, and presenting a wall-shaped prompt element at the vehicle-head position in the scene, the wall-shaped prompt element being presented with a dynamic growth effect from a first size to a second size and, after growing to the second size, with a dynamic change effect from a predetermined color to transparent; wherein the second size is greater than the first size.
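The grow-then-fade animation of the wall-shaped prompt in claim 5 can be sketched as a keyframe function; the durations, sizes, and linear easing below are assumptions made for illustration.

```python
def wall_prompt_keyframe(t, grow_time=0.5, fade_time=0.5,
                         first_size=1.0, second_size=2.0):
    """Return (size, opacity) of the wall-shaped prompt at time t seconds.

    The element grows from first_size to second_size during grow_time,
    then fades from fully opaque (1.0) to transparent (0.0) over
    fade_time. Values are illustrative, not taken from the patent."""
    if t <= grow_time:
        frac = t / grow_time
        return (first_size + (second_size - first_size) * frac, 1.0)
    t_fade = min(t - grow_time, fade_time)
    return (second_size, 1.0 - t_fade / fade_time)
```

Sampling this function once per rendered frame produces the "grow, then turn transparent" sequence the claim describes.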
6. The method according to any one of claims 1-5, further comprising:
detecting a control event for the vehicle, and determining corresponding target camera movement parameters;
and switching the scene and the vehicle model to a display state corresponding to the target camera movement parameters.
7. The method of claim 6, wherein switching the scene and the vehicle model from current camera movement parameters to the display state corresponding to the target camera movement parameters comprises:
displaying, on the user interface, display content corresponding to the transition from the current camera movement parameters to the target camera movement parameters; the display content comprising a dynamically changing image of the scene and a dynamically changing image of the vehicle model.
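One plausible way to realize the transition of claim 7 is to blend the two camera ("mirror movement") parameter sets per rendered frame; the parameter names and the linear interpolation are hypothetical, not specified by the patent.

```python
def interpolate_camera(current, target, progress):
    """Linearly blend two camera parameter sets.

    current/target: dicts with the same numeric keys (e.g. distance,
    pitch, yaw -- key names are assumed); progress is clamped to [0, 1],
    where 0 yields the current parameters and 1 yields the target."""
    progress = min(max(progress, 0.0), 1.0)
    return {k: current[k] + (target[k] - current[k]) * progress
            for k in current}
```

Evaluating this at increasing `progress` values yields the dynamically changing images of the scene and vehicle model during the transition.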
8. The method of claim 6 or 7, wherein the control event comprises any of:
detecting a switch of the vehicle from a first driving mode to a second driving mode;
detecting a switch of the vehicle from a first vehicle state to a second vehicle state;
detecting that the vehicle performs a target driving operation;
detecting a control operation on the vehicle itself;
detecting that the vehicle is in a target road-condition environment;
detecting that the vehicle meets a corresponding early-warning condition, and determining second camera movement parameters corresponding to the early-warning condition;
and detecting a user adjustment operation on the current camera movement parameters.
9. The method of any of claims 1-8, wherein the user interface comprises a first display area and a second display area;
the displaying the vehicle model and the scene on the user interface comprises:
displaying a vehicle model and a scene in a first display area of the user interface;
the method further comprises the following steps:
and displaying the second display area as a corresponding target color according to the driving state, and/or displaying a text prompt element corresponding to the driving state in the second display area.
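The mapping from driving state to the second display area's color and text prompt in claim 9 could be sketched as a simple lookup; the state names, colors, and prompt texts below are invented for illustration.

```python
STATE_STYLES = {
    # State names, colors, and texts are illustrative assumptions.
    "cruising":    ("#1E90FF", "Cruise control active"),
    "lane_change": ("#FFA500", "Changing lanes"),
    "warning":     ("#FF4500", "Obstacle ahead"),
}

def second_area_style(driving_state):
    """Map a driving state to (background color, text prompt) for the
    second display area; unknown states fall back to a neutral style."""
    return STATE_STYLES.get(driving_state, ("#333333", ""))
```

The table can be extended per driving mode without touching the rendering code, which only consumes the returned (color, text) pair.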
10. An electronic device comprising a storage component, a display component, and a processing component; the storage component stores one or more computer instructions for invocation and execution by the processing component to implement the display method of any of claims 1 to 9.
11. A vehicle, comprising a vehicle body and the electronic device of claim 10 provided in the vehicle body.
CN202211321463.9A 2022-10-26 2022-10-26 Display method, electronic device and vehicle Pending CN115904156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211321463.9A CN115904156A (en) 2022-10-26 2022-10-26 Display method, electronic device and vehicle


Publications (1)

Publication Number Publication Date
CN115904156A true CN115904156A (en) 2023-04-04

Family

ID=86475125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211321463.9A Pending CN115904156A (en) 2022-10-26 2022-10-26 Display method, electronic device and vehicle

Country Status (1)

Country Link
CN (1) CN115904156A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116608879A (en) * 2023-05-19 2023-08-18 亿咖通(湖北)技术有限公司 Information display method, apparatus, storage medium, and program product


Similar Documents

Publication Publication Date Title
CN109484299B (en) Method, apparatus, and storage medium for controlling display of augmented reality display apparatus
US10215583B2 (en) Multi-level navigation monitoring and control
CN104515531B (en) 3- dimension (3-D) navigation system and method for enhancing
CN107206934B (en) Imaing projector
JP6149824B2 (en) In-vehicle device, control method for in-vehicle device, and control program for in-vehicle device
JP6642972B2 (en) Vehicle image display system and method
US20140362195A1 (en) Enhanced 3-dimensional (3-d) navigation
US20160054563A9 (en) 3-dimensional (3-d) navigation
CN107176165A (en) The control device of vehicle
CN104512336A (en) 3-dimensional (3-D) navigation
CN108216023A (en) Vehicle attention alarm set and attention based reminding method
JP2006501443A (en) Method and apparatus for displaying navigation information on a vehicle
US11364842B2 (en) Notification device
JP7017154B2 (en) Display control device and display control program
US20210323540A1 (en) Vehicle driving and monitoring system, vehicle including the vehicle driving and monitoring system, method for maintaining a situational awareness at a sufficient level, and computer readable medium for implementing the method
US20230221569A1 (en) Virtual image display device and display system
CN115904156A (en) Display method, electronic device and vehicle
JP2019027996A (en) Display method for vehicle and display device for vehicle
US11643008B2 (en) Display device and display method for display device
CN113401056A (en) Display control device, display control method, and computer-readable storage medium
JP7127565B2 (en) Display control device and display control program
CN115857772A (en) Display method and electronic equipment
JPWO2018198746A1 (en) Vehicle control device
JP2022043996A (en) Display control device and display control program
CN114867992A (en) Method and apparatus for presenting virtual navigation elements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination