CN115848271A - Parking display method, vehicle terminal, vehicle and product - Google Patents

Parking display method, vehicle terminal, vehicle and product

Info

Publication number
CN115848271A
CN115848271A (application CN202211321625.9A)
Authority
CN
China
Prior art keywords
vehicle
user
vehicle model
display
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211321625.9A
Other languages
Chinese (zh)
Inventor
李青
王睿
侯珩
瞿洲
耿硕晨
王璐
宁晓雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Beijing Jidu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jidu Technology Co Ltd filed Critical Beijing Jidu Technology Co Ltd
Priority to CN202211321625.9A
Publication of CN115848271A

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The embodiment of the application provides a parking display method, a vehicle terminal, a vehicle and a product. When the target vehicle is in a parking state, environmental information around the target vehicle and a vehicle model are rendered into a display picture; controllable components of the target vehicle that are visible within a preset viewing angle range are highlighted in the vehicle model in the form of interactive elements; and, in response to a triggering operation on an interactive element, the view moves from the preset viewing angle toward the controllable component corresponding to the triggered interactive element until it reaches a first viewing angle, so as to display the triggered controllable component and the environmental information visible at the first viewing angle. Because the vehicle model and the surrounding environment information are rendered into the display picture, a user in the cockpit can learn the condition of the target vehicle and of its surroundings in time; when the user triggers an interactive element, the controllable component corresponding to that element is the content the user is currently attending to. Moving the preset viewing angle toward the controllable component lets the user follow the viewing-angle adjustment intuitively and obtain a better viewing experience.

Description

Parking display method, vehicle terminal, vehicle and product
Technical Field
The application relates to the technical field of vehicle control, in particular to a parking display method, a vehicle terminal, a vehicle and a product.
Background
As vehicle technology has developed, vehicles have come to be equipped with various terminals, such as speakers, lights, and displays.
The display terminal is an important vehicle-mounted terminal for meeting human-computer interaction requirements. In the prior art, the vehicle information that a display terminal can present for human-computer interaction is limited. In the parking state in particular, mainly information related to the vehicle's static state is displayed, with little attention to the environment around the vehicle. Even on vehicles equipped with a display device that can show images of the surroundings, the displayed content is relatively fixed and the user's sense of participation is low. Moreover, adjusting the screen content makes the user jump from one screen to another, and a user unfamiliar with the vehicle or its surroundings may not understand the positional relationship between the content of the new screen and the vehicle.
Disclosure of Invention
The embodiment of the application provides a parking display method, a vehicle terminal, a vehicle and a product, aiming to improve the interactive experience of human-computer interaction through the displayed content in the parking state.
In a first aspect, an embodiment of the present application provides a parking display method, including:
when a target vehicle is in a parking state, rendering environmental information around the target vehicle and a vehicle model into a display picture;
highlighting in the vehicle model, in the form of interactive elements, controllable components of the target vehicle that are visible within the preset viewing angle range;
and responding to the triggering operation of the interactive element, and moving from the preset view angle to a first view angle in the direction of the controllable component corresponding to the triggered interactive element so as to show the triggered controllable component and the environmental information contained in the first view angle.
The vehicle model and the surrounding environment information are rendered into the display picture simultaneously. When a controllable component in the vehicle model needs to be triggered, the viewing angle range is adjusted: during the adjustment, the virtual lens is moved from the preset viewing angle to the first viewing angle at which the controllable component is located, and the corresponding controllable component on the vehicle model, together with the visible surrounding environment information, is then displayed from the first viewing angle. Because the viewing angle is adjusted by continuously moving the lens from the preset viewing angle to the first viewing angle, the continuity of what the user watches is effectively improved.
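The continuous lens movement described above can be sketched as a simple camera interpolation. This is a minimal illustration, not the patent's implementation; the (yaw, pitch, distance) parameterization and the easing curve are assumptions:

```python
import math

def interpolate_view(preset, target, steps):
    """Generate intermediate camera poses so the virtual lens glides
    continuously from the preset viewing angle to the first viewing
    angle instead of jumping.  `preset`/`target` are illustrative
    (yaw, pitch, distance) tuples."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps
        # Cosine ease-in/ease-out keeps motion smooth at both endpoints.
        s = (1 - math.cos(math.pi * t)) / 2
        frames.append(tuple(p + (q - p) * s for p, q in zip(preset, target)))
    return frames
```

Each returned tuple is one rendered frame; the last frame equals the target pose, so the move always lands exactly on the first viewing angle.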
Optionally, rendering the environmental information around the target vehicle and the vehicle model into a display screen includes:
and if the vehicle model is shielded by the environment object contained in the environment information within the preset visual angle range, displaying the environment object in a semitransparent state.
Because environment objects adjacent to the target vehicle are rendered into the display picture, an environment object may occlude the vehicle model at some viewing angles. To let the user view the vehicle model more clearly, such an environment object can be set to a semi-transparent state, so that the user perceives its presence while still seeing the vehicle model clearly.
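The occlusion handling can be expressed as a small opacity rule. This is a sketch under assumed opacity values (1.0 opaque, 0.5 semi-transparent, 0.0 fully transparent); the function name and signature are illustrative:

```python
def opacity_for(env_object, occludes_model, model_touched=False):
    """Choose a render opacity for an environment object (e.g. a garage
    wall) near the vehicle model."""
    if not occludes_model:
        return 1.0   # nothing is blocked: render the object normally
    if model_touched:
        return 0.0   # user touched the occluded part: hide the object
    return 0.5       # semi-transparent: object perceptible, model visible
```

The `model_touched` branch reflects the later refinement that a touched or state-changed occluded part raises the object's transparency further.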
Optionally, the method further comprises:
adjusting the preset viewing angle in response to a rotating, dragging or zooming operation on the vehicle model, or in response to attention information detected by the vehicle;
after the preset viewing angle is adjusted, highlighting the interactive elements corresponding to controllable components that have moved into the preset viewing angle range;
and, after the preset viewing angle is adjusted, hiding the interactive elements corresponding to controllable components that have moved out of the preset viewing angle range.
The attention information may be information that requires the user's attention, such as a vehicle or a pedestrian approaching. When the vehicle model is adjusted, different controllable components become visible at different viewing angles, so the display states of the corresponding interactive elements are adjusted accordingly to meet the user's viewing and control needs.
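The show/hide rule for interactive elements after a viewing-angle adjustment can be sketched as follows. Component positions are reduced to a single angle for illustration; a real system would test 3-D visibility:

```python
def update_elements(components, view_range):
    """After the preset viewing angle changes, highlight the interactive
    elements of controllable components inside the visible range and
    hide those outside it.  `components` maps name -> angle (degrees);
    `view_range` is an inclusive (lo, hi) pair.  Schematic only."""
    lo, hi = view_range
    return {name: ("highlight" if lo <= angle <= hi else "hidden")
            for name, angle in components.items()}
```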
Optionally, the method further comprises: adjusting the position or angle of the vehicle model in a display screen in response to a moving operation or a rotating operation on the vehicle model;
the interactive element disappears during the position or angle adjustment of the vehicle model in the display.
Optionally, the method further comprises: if no triggering operation is received within a time threshold, restoring the position or angle of the vehicle model in the display picture to the preset viewing angle range;
and displaying the interactive elements.
When the user has adjusted the vehicle model but no longer needs it, the vehicle model should automatically return to its default state (the preset viewing angle). Specifically, a time threshold may be set; if no triggering operation is received after the threshold has elapsed, the position and/or angle of the vehicle model in the display picture is restored to the preset viewing angle, and the interactive elements are then displayed, so that the user can find the expected vehicle model and the interactive elements of the controllable components the next time.
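The timeout-based restore can be sketched as below; the 10-second default is an assumed value, since the text only says that a time threshold may be set:

```python
def view_after_idle(current_view, preset_view, idle_seconds, threshold=10.0):
    """Return (view, elements_visible): if no triggering operation
    arrived within the threshold, fall back to the preset viewing
    angle and re-show the interactive elements."""
    if idle_seconds > threshold:
        return preset_view, True
    return current_view, False
```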
Optionally, the method further comprises: when the interactive element corresponding to the controllable component is triggered, the vehicle model is displayed in an enlarged mode; the controllable component includes: at least one of a vehicle door, a tail door and a charging opening cover;
when the controllable component is switched on or switched off, the corresponding interactive element is displayed in a breathing light state.
To let the user know the working state of a controllable component more intuitively, the corresponding interactive element is displayed in a breathing-light state while the component is in motion; when the component is in the fully open or fully closed state, the breathing-light state is not used. This makes the working process easy to follow visually and improves the display effect.
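The breathing-light rule maps a component's state to an element style; the state names used here are illustrative, not from the patent:

```python
def element_style(component_state):
    """Breathing-light animation only while the controllable component
    is actually moving; a steady display once it has settled."""
    if component_state in ("opening", "closing"):
        return "breathing"
    return "steady"   # fully open or fully closed
```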
Optionally, the method further comprises: displaying a central lock control, a tailgate control and a charging port control in the display picture;
when the central lock control is triggered, switching the central lock control from a lock model to a miniature car model;
and dynamically displaying the outer frame of the central lock control.
For the convenience of the user, the controllable components can be controlled not only through the interactive elements but also through these controls. To let the user understand the open and closed states more intuitively, the states can be shown through the miniature car model while an action is executed: for example, while a door is opening, the door of the miniature car model is shown open, and while it is closing, the miniature car model is shown closed. In addition, the outer frame of the central lock control can be dynamically displayed during execution, giving a better prompting effect.
Optionally, the method further comprises:
and, in response to a parking-out request for the target vehicle, pulling up the virtual lens from the preset viewing angle range and displaying an adjusted viewing angle range containing the parking-out path.
It is easy to understand that when the target vehicle needs to drive out, route information can be provided so that the user understands the driving environment: the parking-out path and the vehicle model are displayed together in the display picture. During the adjustment, the virtual lens is pulled up from the preset viewing angle, so that the user can follow the lens adjustment, comprehensively understand the relative position between the target vehicle and the parking-out path, and obtain a coherent viewing effect.
Optionally, the method further comprises: adjusting the preset visual angle to a depression angle, and displaying a sensor sensing area of the vehicle model; the sensor sensing area is displayed at a position corresponding to a sensor of the target vehicle;
and displaying a multi-level sensing area according to the sensitivity.
By displaying the sensor sensing area around the vehicle model, divided into multiple levels according to sensitivity, the user can understand the sensing state and the effective area more intuitively.
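The multi-level display could bucket a normalized sensitivity into rings around the vehicle model. A sketch assuming a [0, 1] sensitivity scale and three levels, neither of which is specified in the text:

```python
def sensing_level(sensitivity, levels=3):
    """Map a sensitivity in [0, 1] to a display ring index
    (1 = innermost level, `levels` = outermost)."""
    if not 0.0 <= sensitivity <= 1.0:
        raise ValueError("sensitivity must be in [0, 1]")
    return min(levels, int(sensitivity * levels) + 1)
```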
Optionally, the method further comprises: when the charging gun is plugged in or pulled out, adjusting the view, with the vehicle model as the viewing-angle center, from the preset viewing angle to a viewing angle at which the charging port is visible; and dynamically displaying the charging port.
During a charging operation, to let the user see the charging state in time, the default preset viewing angle is adjusted to a viewing angle centered on the charging port, and the charging port is displayed dynamically. The adjustment from the preset viewing angle to the charging-port viewing angle is made by continuously moving the lens, so that the user obtains a continuous viewing experience.
Optionally, the method further comprises: and displaying the vehicle model in the display picture in a drifting manner within a preset visual angle range.
When there is no operation, the vehicle model is displayed with a dynamic drifting animation. This improves the visual effect and also lets the user easily tell whether the picture has frozen and whether operations are being responded to effectively.
Optionally, the method further comprises: when the vehicle state of the target vehicle changes, moving from a preset view angle to the direction of the vehicle state change position to a first view angle; the vehicle state change includes: at least one of a tire pressure state, a tire temperature state, a vehicle door prompting state, a light show state and a trailer state;
and displaying the changed vehicle state and/or the corresponding environment information.
When the state of a component of the target vehicle changes or becomes abnormal, the viewing angle range is adjusted by continuously moving the lens, so that the user can notice the abnormal state intuitively and in time and fully understand the content that requires attention.
Optionally, when a door opening instruction of a user is received, determining that a vehicle door is to be opened according to the detected position relationship between at least one passenger and the target vehicle;
when the occupant interferes with the opening of the vehicle door, the opening track of the vehicle door is displayed, and the interference position of the vehicle door and the target occupant is highlighted.
When the vehicle owner controls a door to open so that an occupant can get on or off, the door sweeps through a section of space as it opens and may therefore intrude on the occupant's current position or collide with the occupant. To avoid such a collision while the door opens, the relative position between the occupant and the door can be displayed and the door's opening trajectory rendered. When a possible collision is found, the driver can be prompted to move the vehicle forward or outward, so that the door can open normally and occupants can get on or off conveniently.
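The interference check amounts to testing detected occupant positions against the door's opening trajectory. A 2-D sketch with an assumed clearance radius; a real system would use the sensors' 3-D data:

```python
def door_interference(door_arc, occupant_positions, clearance=0.2):
    """Return (occupant, arc_point) for the first occupant standing
    inside the door's swing path, or None if the door can open freely.
    `door_arc` is a list of (x, y) points sampled along the trajectory;
    the 0.2 m clearance is an illustrative assumption."""
    for occupant in occupant_positions:
        ox, oy = occupant
        for px, py in door_arc:
            if ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5 < clearance:
                return occupant, (px, py)   # highlight this position
    return None
```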
Optionally, the method further comprises: generating a function description scene based on the vehicle model in response to a triggering instruction for starting a vehicle function of the target vehicle;
and displaying the response state of the vehicle model in the function description scene and/or the corresponding environment information after the vehicle function is started or closed.
When a vehicle function is adjusted, the method is not limited to a text description: the function is demonstrated through the function description scene, so that both new and experienced users can fully understand the effect of the adjustment. In addition, because the adjustment takes place in the parking state, the safety of the vehicle and of personnel during the process is ensured.
In a second aspect, an embodiment of the present application provides a vehicle terminal, including: a memory, a processor;
the memory to store one or more computer instructions;
the processor is configured to execute the one or more computer instructions for performing the steps in the method of any of the first aspects.
In a third aspect, the present application provides a vehicle provided with the vehicle terminal described in the second aspect.
In a fourth aspect, the present application provides a computer program product, which when executed is capable of implementing the steps in the method of the first aspect.
According to the parking display method, the vehicle terminal, the vehicle and the product of the embodiments, when a target vehicle is in a parking state, environmental information around the target vehicle and a vehicle model are rendered into a display picture; controllable components of the target vehicle that are visible within the preset viewing angle range are highlighted in the vehicle model in the form of interactive elements; and, in response to a triggering operation on an interactive element, the view moves from the preset viewing angle toward the controllable component corresponding to the triggered interactive element until it reaches a first viewing angle, so as to show the triggered controllable component and the environmental information visible at the first viewing angle. With this scheme, the vehicle model and the surrounding environment information are rendered in the display picture, so that a user in the cabin can learn the condition of the target vehicle and of its surroundings in time. Furthermore, controllable components, which can be controlled through their associated interactive elements, are displayed on the vehicle model. When the user triggers an interactive element, the corresponding controllable component is the content the user is currently attending to; moving the preset viewing angle range toward that component therefore lets the user follow the viewing-angle adjustment intuitively and obtain a better viewing experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a vehicle terminal illustrated in the present application;
FIG. 2 is a schematic flow chart of a parking display method provided by an embodiment of the application;
FIG. 3 is a schematic diagram of a vehicle model and environmental objects illustrated in an embodiment of the present application;
FIG. 4 is a schematic view of a sensor sensing area illustrated in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a change in vehicle state according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the function opening and closing illustrated in the present application;
fig. 7 is a schematic structural diagram of a parking display device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the application will be clearly and completely described below with reference to the drawings in the embodiments.
In some of the flows described in the specification, claims, and above-described figures of the present invention, a number of operations are included that occur in a particular order, which operations may be performed out of order or in parallel as they occur herein. The sequence numbers of the operations, e.g., 101, 102, etc., are used merely to distinguish between the various operations, and do not represent any order of execution per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
In the prior art, when a vehicle is parked, the user can complete interactive operations through a human-computer interaction terminal. However, the interactive operations such a terminal can provide in the parking state generally relate only to static vehicle functions. Moreover, during interaction the displayed contents are clearly independent of one another: the continuity between different pictures is poor, and so is the interaction experience. Further, when vehicle-related functions are set, the related parameters are usually displayed as a parameter table. The user's ability to perceive such parameters is weak, and the user cannot directly experience the change in a vehicle function, or in the strength of a function, caused by adjusting a parameter. A solution that improves the human-computer interaction experience in the parking state is therefore needed.
The embodiment of the application provides a vehicle terminal. Fig. 1 is a schematic structural diagram of the vehicle terminal illustrated in the present application. As can be seen from fig. 1, the terminal includes: a controller 11, a sensor 12, a display 13, a memory 14, and the like. The sensor may be used to detect the target vehicle's own information (e.g., vehicle coordinates, heading, tire pressure, charging state, door open/closed state), lane markings, and to collect and recognize information about the environment around the vehicle body.
The display device described here may be a head-up display (HUD), a projection display, a liquid crystal display, an LED display, an AR (Augmented Reality) display, a VR (Virtual Reality) display, or the like. The display may be mounted in a main cab dashboard position, center console position, secondary cab position, rear seat, roof, etc., may be a rectangular display extending from the main cab to the secondary cab, or may be a wearable display device.
The vehicle model of the target vehicle and the environmental information around the target vehicle detected by the sensor are rendered simultaneously into the display picture presented on the display. The rendered environmental information includes the parking space in which the target vehicle is located, the state of adjacent parking spaces (a vehicle model is rendered if an adjacent space is occupied; an empty space is shown otherwise), and objects adjacent to the target vehicle (for example, the wall of an underground garage, a trash can, and so on). Although the vehicle is stationary (or temporarily stationary), the user can still comprehensively and promptly learn the vehicle's state and the state of the environment around the target vehicle through the rendered content.
When a user (an occupant sitting in the vehicle) wants to learn more, this can be done through interaction with the display. Specifically, in the default state, the vehicle model and the environmental information are displayed from the preset viewing angle. If the user drags, rotates, or otherwise operates on the vehicle model, the preset viewing angle is adjusted so that the content the user is focusing on can be displayed. If the vehicle state changes (for example, the tire pressure falls below a preset value, a door opens, or the vehicle starts charging), the preset viewing angle is adjusted so that the part of the vehicle model whose state has changed is highlighted. If the environmental information changes, for example a vehicle approaches or it suddenly starts to rain, the preset viewing angle is adjusted accordingly, so that the changed environmental information, and its influence on the target vehicle and its occupants, can be highlighted while the vehicle model remains displayed.
The parking display method will be explained below by way of specific embodiments. Fig. 2 is a schematic flowchart of a parking display method according to an embodiment of the present application. As can be seen from fig. 2, the method specifically includes the following steps:
step 201: when the target vehicle is in a parking state, rendering environment information around the target vehicle and a vehicle model into a display screen.
Step 202: highlighting in the vehicle model the controllable components of the target vehicle visible within the preset viewing angle range in the form of interactive elements.
Step 203: and responding to the triggering operation of the interactive element, and moving from the preset view angle to a first view angle in the direction of the controllable component corresponding to the triggered interactive element so as to show the triggered controllable component and the environmental information contained in the first view angle.
In the parking state, if the target vehicle senses that an occupant is still in the cabin, the vehicle is powered on and a display picture is presented to the occupant through the display. If there is no occupant in the cabin of the target vehicle, the display is off.
The following embodiments assume that there are occupants in the cabin and that the display interface can display normally. In that case, the environmental information around the target vehicle in the parking state is collected and processed, and both the vehicle model corresponding to the target vehicle and the collected environmental information are rendered into the display interface.
In the vehicle model, the controllable components of the target vehicle can be seen, such as the doors, trunk, charging port, sunroof, and lights. The controllable components can be controlled by the user through their corresponding interactive elements, which are highlighted relative to the vehicle model, for example as a breathing light, a highlight, or a highlighted portion of the controllable component on the vehicle model (e.g., a highlighted door handle). Note that an interactive element can be adjusted according to the environment color and the car-body color so that it contrasts strongly with both and is easier for the user to see; for example, when the element is highlighted as a highlight or a breathing light, the color of the highlight or breathing light can be adjusted to contrast strongly with the environment color and the car-body color.
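The contrast adjustment could be approximated by complementing the average of the two colors the element must stand out from. A crude heuristic for illustration only, not the patent's algorithm:

```python
def pick_highlight_color(env_rgb, body_rgb):
    """Pick a highlight color that contrasts with both the ambient
    color and the car-body color by taking the channel-wise complement
    of their average (8-bit RGB)."""
    return tuple(255 - (e + b) // 2 for e, b in zip(env_rgb, body_rgb))
```

For example, against a dark environment and a dark body color this yields a near-white highlight, and vice versa.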
In practical applications, when an interactive element is triggered, it indicates that the user is paying particular attention to it. The view is moved from the preset viewing angle to a first viewing angle at which the interactive element and its controllable component can be displayed, so that the user can more comprehensively follow the process of triggering the controllable component and its state after being triggered. While moving from the preset viewing angle to the first viewing angle, the virtual camera moves along a preset trajectory and continuously displays the vehicle model and the environmental information, so that the user can more intuitively understand the vehicle model, the surrounding environment, and the spatial relationship between the preset viewing angle and the first viewing angle. Unlike a picture jump, this gives the user a better sense of immersion.
In one or more embodiments of the present application, rendering environment information around the target vehicle and a vehicle model into a display screen includes: and if the environmental object contained in the environmental information covers at least part of the vehicle model within the preset visual angle range, displaying the environmental object in a semitransparent state.
Fig. 3 is a schematic diagram of a vehicle model and an environmental object according to an embodiment of the present application. As can be seen from fig. 3, when the target vehicle is in the parked state, the environmental information around the vehicle is collected, rendered, and presented in the display interface. The environmental information may include environment objects, ambient weather, and other information; an environment object rendered into the display interface is visible to the user. It is easy to understand that the rendering the user sees depends on the viewing angle: at some angles the user sees the vehicle model directly, while at others an environment object occludes the vehicle model. If the user's current focus is the vehicle model, the environment object is processed so that the part occluding the vehicle model is shown in a semi-transparent state, letting the user directly see the relative position between the vehicle model and the environment object as well as the current state of the vehicle model.
If the state of the occluded part of the vehicle model changes, or the user touches the occluded part, the environment object is rendered fully transparent, or its transparency is increased, so that the user can see the occluded part more clearly.
It should be noted that if, even after the environment object is made fully or semi-transparent, the part of the vehicle model the user is focusing on or has triggered still cannot be seen clearly, the range and angle of the preset viewing angle can be adjusted so that the user can see the content of interest clearly.
In this way, both the vehicle model corresponding to the target vehicle and the environmental information of the surroundings in which it is located are rendered, so that the user is aware of environment objects while still seeing the vehicle model clearly. An environment object here may be any object visible to the user, such as a building, a vehicle, or a living thing.
In one or more embodiments of the present application, the method further comprises: adjusting the preset viewing angle in response to a rotation, dragging or zooming operation on the vehicle model, or in response to attention information detected by the target vehicle; after the preset viewing angle is adjusted, highlighting the interactive elements corresponding to controllable components that have moved into the preset viewing angle range; and hiding the interactive elements corresponding to controllable components that have moved out of the preset viewing angle range.
In practical applications, the user may perform touch operations on the vehicle model as needed, for example a zoom operation (zoom-in or zoom-out), a rotation operation, or a drag operation, so as to bring into view the part of the vehicle model the user wants to inspect, or the controllable component the user wants to operate.
In addition, if a change in the environmental information around the target vehicle is detected by an on-board sensor (e.g., a camera or radar), for example another vehicle or a person approaching the target vehicle, the preset viewing angle is adjusted to alert the occupants in time, moving the corresponding part of the vehicle model that requires their attention into the field of view.
Likewise, if abnormal tire pressure or abnormal charging is detected by an on-board sensor (such as a tire pressure sensor or a charging state sensor) and the occupants need to be alerted in time, the preset viewing angle may also be adjusted, again moving the corresponding part of the vehicle model into the field of view.
Based on the above, when the preset viewing angle range is adjusted, the angle, range and size of the displayed vehicle model may change, and so may the size and angle of each controllable component visible to the user. If a controllable component becomes invisible to the user, its interactive element is hidden; if a controllable component is scaled down so far that the user can no longer operate it reliably, its interactive element is likewise hidden. Conversely, if a controllable component becomes visible after the viewing angle adjustment, its interactive element is displayed, and may be highlighted or flashed at the moment it first becomes visible; if a controllable component is enlarged into visibility, its interactive element is highlighted as well.
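The show/hide rule above can be expressed as a short state function evaluated per controllable component after each viewing-angle change. The field names and the minimum touch-target size are assumptions for illustration.

```python
# Hypothetical sketch of the show/hide decision for one interactive element
# after the preset viewing angle changes. MIN_TOUCH_PX is an assumed value.

MIN_TOUCH_PX = 44  # assumed minimum on-screen size for a reliable touch target

def element_state(visible: bool, rendered_px: float, was_visible: bool) -> str:
    """Decide how the interactive element of one controllable component
    is shown after the viewing-angle adjustment."""
    if not visible or rendered_px < MIN_TOUCH_PX:
        return "hidden"       # out of view, or too small to operate reliably
    if not was_visible:
        return "flash"        # just entered the viewing range: draw attention
    return "highlighted"      # still visible: keep the normal highlight
```

The same function covers both directions of the rule: components scaled down below the touch threshold are hidden even if technically on screen, and components entering visibility are flagged for a one-time flash.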
In one or more embodiments of the present application, the method further comprises: adjusting the position or angle of the vehicle model in the display screen in response to a moving or rotating operation on the vehicle model, and hiding the interactive elements while the position or angle of the vehicle model is being adjusted.
In practical applications, the user may perform touch operations on the vehicle model, or on other positions in the display interface, as needed, for example a zoom operation (zoom-in or zoom-out), a rotation operation, or a drag operation, so as to bring into view the part of the vehicle model the user wants to inspect or the controllable component the user wants to operate. While such an operation is being executed, it is difficult to determine whether the user intends to touch a particular controllable component: the user may press an interactive element but then drag or zoom rather than click it. Therefore, to avoid erroneous responses while the model view is being adjusted, all interactive elements are hidden, so that even if the user touches an interactive element during the operation it does not respond. During the adjustment, as soon as a trigger action not aimed at an interactive element, such as a drag, zoom or rotation, is detected, the interactive elements disappear, which effectively prevents the user from triggering them by mistake.
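The suppression described above amounts to a small guard around tap handling: once a view gesture begins, taps on interactive elements are ignored until the gesture ends. The class and event names below are illustrative assumptions, not part of the application.

```python
# Minimal sketch of suppressing interactive elements during a view gesture,
# so a press that turns into a drag/zoom/rotation never triggers a component.

class InteractionGuard:
    def __init__(self):
        self.gesture_active = False

    def on_gesture_begin(self):      # a drag, zoom or rotation was detected
        self.gesture_active = True   # all interactive elements are hidden

    def on_gesture_end(self):
        self.gesture_active = False  # elements may be restored afterwards

    def handle_tap(self, element_action) -> bool:
        """Run the element's action only when no view gesture is in progress.
        Returns True if the action was executed."""
        if self.gesture_active:
            return False             # ignore touches mid-gesture
        element_action()
        return True
```

In a real UI framework the gesture begin/end callbacks would come from the touch recognizer; the guard itself is the whole of the anti-mistrigger logic.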
Further, the method also comprises: if no triggering operation is received within the time threshold, restoring the position or angle of the vehicle model in the display screen to the preset viewing angle range, and displaying the interactive elements.
After the user completes an operation (such as a rotation or drag operation), timing starts from that operation. If the user performs another operation, such as a further drag, before the timed duration reaches the time threshold, the timer is reset and restarted. Conversely, once the timed duration reaches or exceeds the time threshold without any further operation on the vehicle model, the view is adjusted from the interface left by the user's operations back to the preset viewing angle range, and after the adjustment is complete, the interactive elements visible within the preset viewing angle range are displayed again.
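The idle-timeout behaviour above reduces to tracking the timestamp of the last operation. A minimal sketch, assuming a caller-supplied monotonic clock and an illustrative threshold value:

```python
# Sketch of the idle-timeout reset. IDLE_THRESHOLD_S is an assumed value;
# 'now' would come from a monotonic clock in a real implementation.

IDLE_THRESHOLD_S = 5.0   # assumed timeout before the view snaps back

class ViewResetTimer:
    def __init__(self):
        self.last_op_time = 0.0

    def on_user_operation(self, now: float):
        self.last_op_time = now      # any drag/zoom/rotation restarts timing

    def should_reset(self, now: float) -> bool:
        """True once no operation has arrived for the full threshold; the view
        then returns to the preset viewing angle and elements are redisplayed."""
        return now - self.last_op_time >= IDLE_THRESHOLD_S
```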
Optionally, the method further comprises: displaying the vehicle model in an enlarged manner when the interactive element corresponding to a controllable component is triggered, the controllable component comprising at least one of a vehicle door, a tailgate and a charging flap; and displaying the corresponding interactive element in a breathing-light state while the controllable component is being opened or closed.
In practical applications, within the content displayed at the preset viewing angle, the size of the vehicle model can be adjusted to a moderate state so that the user can view more useful information at once: the display interface shows the complete vehicle model together with the environmental information around it, for example the vehicles parked in the spaces around the target vehicle, the surrounding buildings, and any recognized charging piles.
To give the user a better and more accurate triggering effect through the interactive elements, the vehicle model may be displayed in an enlarged manner when the user triggers an interactive element, so that the enlarged model, and in particular the enlarged controllable component, can be seen more clearly, and the user can touch the corresponding interactive element more accurately. It should be noted that a controllable component is a component capable of executing a corresponding action in response to the user's touch operation, such as at least one of a vehicle door, tailgate, charging flap, vehicle window and sunroof.
While the user is triggering an interactive element, in order to let the user distinguish the different working states of the controllable component more clearly, the corresponding interactive element is emphasized when the component opens or closes, for example displayed in a breathing-light state, or otherwise highlighted. The display effects described here are merely examples and do not limit the technical solution of the present application.
In one or more embodiments of the present application, the method further comprises: displaying a central lock control, a tailgate control and a charging port control in the display screen; when the central lock control is triggered, switching the central lock control from a lock model to a miniature vehicle model; and dynamically displaying the outer frame of the central lock control.
In practical applications, a central lock control, a tailgate control, a charging port control and the like are displayed in the display screen. The central lock control is used to control the opening and closing of each vehicle door, the tailgate control is used to control the opening and closing of the tailgate of the target vehicle, and the charging port control is used to control the opening and closing of the charging flap.
As an option, these controls may be displayed above the display interface in which the vehicle model is located, so that the user can open and close a vehicle door, the tailgate or the charging flap directly by triggering them, without going through cumbersome multi-level menu selections.
When the central lock control is triggered, it switches from a lock model to a miniature vehicle model, the switch representing the change of control state; the miniature vehicle model is used to show whether the doors are open or closed. Furthermore, to improve the display effect, the outer frame of the central lock control can be displayed dynamically (for example, as a breathing light or a marquee light) as an action prompt.
In one or more embodiments of the present application, the method further comprises: in response to a parking-out request for the target vehicle, pulling up the virtual camera from the preset viewing angle range and displaying an adjusted viewing angle range containing the parking-out path.
In practical applications, a vehicle follows a corresponding track when entering or leaving a parking space: the path along which the target vehicle enters the space is called the parking-in path, and the path along which it leaves is called the parking-out path. It is easy to understand that when more content (including the vehicle model and the parking-out path) needs to be presented in the display interface at the same time, the camera's angle of view needs to be adjusted to obtain a more comprehensive viewing range.
Within the preset viewing angle range, usually only the vehicle model and a small amount of nearby environmental information are displayed, and the user sees the vehicle model from one particular angle. To let the user understand the spatial position of the target vehicle in the environment and its position relative to the parking-out path, the virtual lens filming the vehicle model can be raised in a continuous camera move (lifted along a preset track), so that the user intuitively grasps the relative position of the vehicle model and the parking-out track. Because the lens is pulled up by moving continuously rather than by cutting, the user obtains a continuous viewing experience and can accurately follow the relationship between the vehicle model and the parking-out track.
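A continuous pull-up of this kind is typically implemented by easing the camera's height and pitch between two keyframes over several frames rather than cutting. The keyframe values and the use of smoothstep easing below are illustrative assumptions.

```python
# Illustrative interpolation for the continuous "pull-up" of the virtual lens:
# camera height and pitch are eased along a preset track instead of cutting.
# Start/end keyframes are assumed values: (height_m, pitch_deg).

def lens_pull_up(t: float, start=(2.0, 10.0), end=(12.0, 60.0)):
    """Return (height_m, pitch_deg) at progress t in [0, 1], using
    smoothstep easing so the move starts and ends gently."""
    t = max(0.0, min(1.0, t))
    s = t * t * (3.0 - 2.0 * t)              # smoothstep easing
    h = start[0] + (end[0] - start[0]) * s
    p = start[1] + (end[1] - start[1]) * s
    return h, p
```

Evaluating this per frame with `t` advancing from 0 to 1 produces the uninterrupted camera move the text describes, ending high enough to frame both the vehicle model and the parking-out path.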
In one or more embodiments of the present application, the method further comprises: adjusting the preset viewing angle to a top-down (depression) angle and displaying the sensor sensing area of the vehicle model, the sensing area being displayed at positions corresponding to the sensors of the target vehicle; and displaying the sensing area in multiple levels according to sensitivity.
Fig. 4 is a schematic diagram of a sensor sensing area according to an embodiment of the present application. As can be seen from fig. 4, sensors (e.g., voice pickup sensors) are disposed at various positions on the body of the target vehicle to capture sounds in the environment. For example, a user near the vehicle or in its cabin who wants to open a door, the tailgate or a window of the target vehicle by voice can achieve the desired control effect simply by issuing a voice command, without any manual operation.
If the user issues voice commands outside the vehicle, they are easily disturbed by environmental factors: sensors at some positions on the vehicle can capture the speech clearly, while at other positions they cannot. The sensor sensing area can therefore be shown in the display interface; only sounds within the sensing area can be perceived and recognized. When the sensing area is displayed, its position corresponds to the positions of the configured sensors.
Further, so that the user can know more precisely how sensitivity varies with the relative distance to the vehicle, the sensor sensing area may be displayed as a multi-level sensing area: the deeper the color, the closer to the vehicle and the higher the sensitivity; the lighter the color, the farther from the vehicle and the lower the sensitivity. On this basis, occupants inside the vehicle can also locate a sound source outside the vehicle from the sensing area and thus accurately understand its position relative to the target vehicle's body. For example, when a sound source is detected, its location is rendered into the sensor sensing area and highlighted as a bright spot.
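One way to realize the multi-level area is to bin the sound source's distance into concentric rings, each ring mapped to a color depth and a sensitivity. The ring radii and the linear sensitivity scale below are assumptions for illustration.

```python
# Sketch of mapping a sound source's distance to a sensing level, matching the
# "deeper color = closer = more sensitive" rule. Ring radii are assumed values.

RING_RADII_M = (1.0, 2.5, 4.0)   # assumed outer radii of the three levels

def sensing_level(distance_m: float):
    """Return (level, sensitivity), where level 0 is the innermost, most
    sensitive ring; None means the source lies outside the sensing area
    and cannot be perceived or recognized."""
    for level, radius in enumerate(RING_RADII_M):
        if distance_m <= radius:
            sensitivity = 1.0 - level / len(RING_RADII_M)
            return level, sensitivity
    return None                   # outside every ring
```

The renderer would then paint ring 0 with the deepest color and place the detected source's bright spot inside whichever ring its distance falls into.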
In one or more embodiments of the present application, the method further comprises: while the charging gun is being plugged in or unplugged, adjusting the view from the preset viewing angle to a charging-port-visible viewing angle with the vehicle model as the viewing center; and displaying the charging port dynamically.
In practical applications, when charging of the target vehicle starts or has just finished, the user may pay particular attention to the working state of the charging port. This is especially true in an automatic charging scenario, where no one needs to plug or unplug the charging gun manually: the driver only drives the vehicle to a designated position, the charging gun senses the vehicle's charging demand, the charging flap of the target vehicle opens automatically, and the gun is inserted into the charging port automatically. At this moment, so that the driver can grasp the charging state better, more comprehensively and more promptly, the view can be adjusted from the preset viewing angle to the charging-port-visible viewing angle by a continuous camera move, for example centering the charging port in the display interface. The continuous move lets the user clearly follow the adjustment of the viewing angle while guiding the user's attention from the vehicle model to the charging port and the charging state.
In one or more embodiments of the present application, the method further comprises: displaying the vehicle model in the display screen with a drifting motion within the preset viewing angle range.
As an option, the vehicle model is displayed at the preset viewing angle in the display screen. If the model were completely still, the user could not tell whether the current display content is frozen or in a normal state, so the vehicle model can be set to drift within the preset viewing angle range; that is, the virtual lens floats within a preset range so that the vehicle model moves slightly within the display screen.
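The floating lens can be realized with a small bounded periodic offset applied to the camera each frame, so the model is never perfectly still yet never leaves the preset range. The amplitudes and frequencies below are purely illustrative assumptions.

```python
# Minimal sketch of the drift display: the virtual lens follows a small
# Lissajous-style path, keeping the vehicle model in gentle constant motion.
# Amplitudes (scene units) and frequencies (Hz) are assumed values.

import math

def lens_drift_offset(t_s: float, amp_x=0.05, amp_y=0.03,
                      fx_hz=0.10, fy_hz=0.07):
    """Small (x, y) lens offset at time t_s; bounded by +/- amp per axis,
    so the drift stays within the preset viewing angle range."""
    dx = amp_x * math.sin(2 * math.pi * fx_hz * t_s)
    dy = amp_y * math.sin(2 * math.pi * fy_hz * t_s)
    return dx, dy
```

Using two incommensurate frequencies keeps the path from visibly repeating, which makes the "not frozen" cue more convincing than a simple back-and-forth sweep.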
In one or more embodiments of the present application, the method further comprises: when the vehicle state of the target vehicle changes, moving from the preset viewing angle towards the position of the change to a first viewing angle, the vehicle state change comprising at least one of a tire pressure state, a tire temperature state, a vehicle door prompt state, a light show state and a trailer state; and displaying the changed vehicle state and/or the corresponding environmental information. For example, fig. 5 is a schematic diagram of a vehicle state change according to an embodiment of the present application, showing the vehicle model with a light show of flickering lights around the vehicle body.
The vehicle state change referred to here is a change in the state of the vehicle itself that requires the driver's attention. The view therefore moves from the preset viewing angle towards the location of the change, so that the user can follow the continuous movement of the viewing angle and fully understand the change that requires attention; the picture displayed at the first viewing angle may be centered on the controllable component whose state changed. The vehicle state change includes at least one of a tire pressure state, a tire temperature state, a vehicle door prompt state, a light show state and a trailer state. A tire pressure state covers pressure that is too high or too low, which can be prompted by the numeric pressure value and also by rendering the affected tire or hub in the vehicle model, so that the abnormal pressure attracts the user's attention more readily; different tire pressure states correspond to different rendering effects and thus different prompt effects. Similarly, the tire temperature state can be rendered on the tire or hub in the vehicle model, so that an abnormal tire temperature stands out to the user.
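Mapping a tire reading to a rendering state is a simple threshold check per wheel. The normal pressure window and the state names below are assumptions, not values from the application.

```python
# Sketch of choosing a per-tire rendering state from its pressure reading.
# NORMAL_KPA is an assumed normal window; real thresholds are vehicle-specific.

NORMAL_KPA = (210, 280)   # assumed normal cold-tire pressure range, kPa

def tire_render_state(pressure_kpa: float) -> str:
    """Map a tire-pressure reading to the highlight applied to the
    corresponding tire/hub in the vehicle model."""
    low, high = NORMAL_KPA
    if pressure_kpa < low:
        return "warn_low"    # abnormal rendering + numeric pressure prompt
    if pressure_kpa > high:
        return "warn_high"
    return "normal"
```

The same pattern extends to tire temperature: a reading per wheel, a normal window, and a distinct rendering effect per out-of-range state.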
The vehicle door prompt state includes an unlocking prompt, a locking prompt, a door opening prompt and a door closing prompt, so that the user can intuitively grasp the working state of each controllable component of the vehicle from the door prompt information.
A vehicle state change may also be induced by a change in the surrounding environment. For example, when another vehicle drives towards the target vehicle and parks beside it so that the target vehicle's doors cannot open normally, the door opening trajectory is rendered to warn the user of a possible collision when opening the door. Changes in the environmental state can likewise be shown in the picture: in rain or snow, when occupants are getting in or out, the ground around the vehicle is rendered to remind the user that it is wet. The environmental state information can come from a sensor or from weather forecast information provided by a server.
In one or more embodiments of the present application, the method further comprises: generating a function description scene based on the vehicle model in response to a trigger instruction for turning on a vehicle function of the target vehicle; and displaying, in the function description scene, the response state of the vehicle model and/or the corresponding environmental information after the vehicle function is turned on or off.
In practical applications, when a user sets a vehicle function of the target vehicle (turning it on or off, or adjusting its parameters), the prior art typically prompts with text (for example, "cruise control turned on") or with an indicator light. A user familiar with the function can understand from a few words how turning it on or off, or adjusting its parameters, affects the vehicle; but a user operating the function, or the target vehicle, for the first time can hardly grasp its meaning from a text description alone. In the present scheme, therefore, a function description scene is generated based on the vehicle model, and the response state of the vehicle model and the relevant environmental information after the function is turned on or off are shown in that scene. For example, fig. 6 is a schematic diagram of turning a function on and off as described in the present application: when the lane-change assist function is turned on, the function description scene shows the target vehicle's model together with models of other vehicles in adjacent lanes, demonstrating the prompts the function gives while the vehicle changes lanes.
In practical applications, if an environmental object included in the environmental information spatially interferes with the target vehicle or a controllable component, the center of the picture is moved along a target track to a position that includes the environmental object, and the interfering object is rendered into the picture. The user can thereby see intuitively where the environmental object is relative to the vehicle and where the two may interfere, guiding the user to avoid a collision or scratch between the target vehicle and the object.
In practical applications, when a passenger outside the target vehicle is sensed, the presented viewing angle is adjusted to a top-down angle to show the passenger's relative position within the target vehicle's sensing area. Alternatively, within the sensing area the passenger can unlock the vehicle and open a door by a voice command issued outside the vehicle (for example, "open the door"), freeing both hands.
When the target vehicle is in the parking state and a sensor detects that a passenger intends to board, the target vehicle door is brought into the picture and highlighted according to the relative position of the passenger and the target vehicle; an opening control for the target door is highlighted on the vehicle model; and/or, once the target door has been confirmed as the door to be controlled, the unlocking control is highlighted.
It is readily understood that passengers may need to get in or out while the vehicle is stationary. When a sensor detects a passenger approaching, the relative position of the passenger and the vehicle is marked on the display screen, and the passenger together with the corresponding target door can be highlighted in the picture from the preset viewing angle. Further, to make it convenient for the owner to open the door for the passenger, an unlocking control for the target door can be displayed once that door has been determined; alternatively, when a passenger is detected at the target door, only that door is unlocked and automatically opened or closed after the owner taps the unlocking control, while the other doors do not respond.
In practical applications, when a door-opening instruction from the user is received, the door to be opened is determined from the detected positional relationship between at least one occupant and the target vehicle; when an occupant interferes with the opening of the door, the door's opening trajectory is displayed and the interference position between the door and the occupant is highlighted.
Specifically, when the owner controls a door to open for a passenger to get in or out, the door sweeps outwards through a section of space as it opens and may therefore intrude on the space the passenger currently occupies, or collide with the passenger. To avoid such a collision, the relative position of the passenger and the door can be displayed and the door's opening trajectory rendered; when a possible collision is found, the driver can be prompted to move the vehicle forwards or outwards so that the door can open normally and the passenger can get in or out conveniently.
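The interference test behind this prompt can be approximated in 2D by modelling the door as an arc swept around its hinge and checking whether the occupant's position falls inside the swept sector. The geometry (flat ground-plane arc, fixed opening angle) and all parameter values are simplifying assumptions.

```python
# Illustrative 2D check of whether an occupant stands inside the arc a door
# sweeps while opening. Door length and angle limits are assumed values.

import math

def door_interferes(hinge, occupant, door_len=1.1,
                    closed_deg=0.0, open_deg=70.0) -> bool:
    """True if the occupant's (x, y) position lies within the sector the door
    sweeps from closed_deg to open_deg, i.e. an interference position that
    should be rendered and highlighted."""
    dx, dy = occupant[0] - hinge[0], occupant[1] - hinge[1]
    dist = math.hypot(dx, dy)
    if dist > door_len:
        return False                        # outside the door's reach
    ang = math.degrees(math.atan2(dy, dx))
    return closed_deg <= ang <= open_deg    # inside the swept sector
```

A production system would use the door's actual 3D swept volume from sensor data; the sector test above captures only the decision the display logic needs: highlight the interference position, or not.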
In practical applications, the opening trajectory of a door is likewise displayed according to the relative position of an environmental object and the target vehicle, with the interference position between the door and the object highlighted, to remind occupants to watch for surrounding objects when opening the door, or to limit how far the door opens.
In practical applications, when the vehicle arrives at a destination and enters the parking state, the state of the ground around the vehicle is judged from the received and sensed environmental information, and the rendering effect of the surrounding ground is determined from the judgment result.
For example, the current environment is known to be rainy or snowy from a rainfall sensor or weather forecast information. Once the vehicle is in the parking state and passengers may get in or out, the rendering of the ground prompts the user to watch for the wet, slippery surface and avoid falling.
In practical applications, when change information appears in the surrounding environment, the angle of the presented view (or the position of the virtual camera) is adjusted with the target vehicle as the picture's reference point, so that the change information is displayed in the picture.
Based on the same idea, an embodiment of the present application further provides a parking display device, and fig. 7 is a schematic structural diagram of the parking display device provided in the embodiment of the present application. As can be seen from fig. 7, the following are specifically included:
the rendering module 71 is configured to render the environment information around the target vehicle and the vehicle model into a display screen when the target vehicle is in a parking state.
A display module 72 for highlighting the controllable components of the target vehicle visible within the preset viewing angle range in the vehicle model in the form of interactive elements.
And a view angle moving module 73, configured to, in response to a triggering operation on the interactive element, move from the preset view angle to a first view angle in a direction in which the controllable component corresponding to the triggered interactive element is located, so as to show the triggered controllable component and environment information included in the first view angle.
Optionally, the display module 72 is configured to display the environmental object in a semi-transparent state if the environmental object included in the environmental information blocks the vehicle model within the preset viewing angle range.
Optionally, the display module 72 is further configured to adjust the preset viewing angle in response to a rotation, dragging, or zooming operation on the vehicle model, or in response to attention information detected by the vehicle;
after the preset visual angle is adjusted, highlighting the interactive elements corresponding to the controllable components moved into the preset visual angle range;
and after the preset visual angle is adjusted, the interactive elements corresponding to the controllable components which are moved out of the preset visual angle range disappear.
Optionally, the display module 72 is further configured to adjust a position or an angle of the vehicle model in the display screen in response to a moving operation or a rotating operation on the vehicle model;
the interactive element disappears during the position or angle adjustment of the vehicle model in the display.
Optionally, the display module 72 is further configured to restore the position or angle of the vehicle model in the display screen to the preset viewing angle range if no triggering operation is received within the time threshold;
and displaying the interactive elements.
Optionally, the display module 72 is further configured to display the vehicle model in an enlarged manner when the interactive element corresponding to a controllable component is triggered; the controllable component includes at least one of a vehicle door, a tailgate and a charging flap;
when the controllable component is switched on or switched off, the corresponding interactive element is displayed in a breathing light state.
Optionally, the display module 72 is further configured to display a central lock control, a tail gate control, and a charging port control in the display screen;
when the central lock control is triggered, switch the central lock control from a lock model to a miniature vehicle model;
and dynamically display the outer frame of the central lock control.
Optionally, the viewing angle moving module 73 is further configured to pull up the virtual camera from the preset viewing angle range in response to a parking-out request for the target vehicle, and display an adjusted viewing angle range containing the parking-out path.
Optionally, the display module 72 is further configured to adjust the preset viewing angle to a depression angle, and display a sensor sensing area of the vehicle model; the sensor sensing area is displayed at a position corresponding to a sensor of the target vehicle;
and displaying a multi-level sensing area according to the sensitivity.
Optionally, the display module 72 is further configured to adjust the view from the preset viewing angle to a charging-port-visible viewing angle, with the vehicle model as the viewing center, while the charging gun is being plugged in or unplugged;
and the charging port displays dynamically.
Optionally, the display module 72 is further configured to perform a drift display on the vehicle model in the display screen within a preset viewing angle range.
Optionally, the display module 72 is further configured to move from a preset viewing angle to a direction of the vehicle state change position to a first viewing angle when the vehicle state of the target vehicle changes; the vehicle state change includes: at least one of a tire pressure state, a tire temperature state, a vehicle door prompting state and a trailer state;
and displaying the changed vehicle state and/or the corresponding environment information.
Optionally, the display module 72 is further configured to generate a function description scenario based on the vehicle model in response to a triggering instruction for turning on a vehicle function of the target vehicle;
and displaying the response state of the vehicle model in the function description scene and/or the corresponding environment information after the vehicle function is started or closed.
Optionally, the display module 72 is further configured to, when a door-opening instruction from the user is received, determine the vehicle door to be opened according to the detected positional relationship between at least one occupant and the target vehicle;
and, when an occupant interferes with opening of the vehicle door, to display the door-opening trajectory and highlight the interference position between the door and the target occupant.
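The door-interference check described above can be sketched as follows. This is a hypothetical 2-D simplification, not the patent's implementation: the function name, parameters, and the sector model (door as a segment swinging about its hinge, occupant as a circle) are all assumptions.

```python
import math

def door_interference(hinge, door_len, sweep_start_deg, sweep_end_deg,
                      occupant, occupant_radius):
    """Return True when an occupant standing near the vehicle blocks the
    door's swing path, modelled in 2-D as a circular sector swept by the
    door edge around its hinge (a deliberate simplification)."""
    dx, dy = occupant[0] - hinge[0], occupant[1] - hinge[1]
    dist = math.hypot(dx, dy)
    if dist - occupant_radius > door_len:        # fully outside the swing radius
        return False
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    lo = sweep_start_deg % 360.0
    hi = sweep_end_deg % 360.0
    if lo <= hi:
        return lo <= angle <= hi                 # inside the swept sector
    return angle >= lo or angle <= hi            # sector wraps past 0 degrees
```

When this returns True, the display could highlight the returned occupant position along the rendered door-opening trajectory, as the paragraph above describes.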
Fig. 8 is a schematic structural diagram of a vehicle provided in an embodiment of the present application, and as shown in fig. 8, a vehicle device is configured on the vehicle, and the vehicle device includes: a memory 801 and a controller 802.
The memory 801 is used to store computer programs and may be configured to store other various data to support operations on the vehicle devices. Examples of such data include instructions for any application or method operating on the vehicle device, contact data, phone book data, messages, pictures, videos, and so forth.
The memory 801 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The vehicle apparatus further includes a display device 803. The controller 802, coupled to the memory 801, executes the computer program in the memory 801 to:
when a target vehicle is in a parking state, rendering environmental information around the target vehicle and a vehicle model into a display picture;
highlighting in the vehicle model, in the form of interactive elements, controllable components of the target vehicle that are visible within the preset viewing angle range;
and responding to the triggering operation of the interactive element, and moving from the preset view angle to a first view angle in the direction of the controllable component corresponding to the triggered interactive element so as to show the triggered controllable component and the environmental information contained in the first view angle.
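The movement from the preset viewing angle toward the triggered controllable component can be illustrated with a minimal sketch. The smoothstep easing and every name below are assumptions for illustration, not the patent's implementation:

```python
def view_transition(preset_pos, component_pos, steps=30):
    """Generate intermediate camera positions moving from the preset view
    toward the triggered controllable component, eased with smoothstep so
    the motion starts and ends gently."""
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        s = 3 * t * t - 2 * t * t * t            # smoothstep easing, s(1) == 1
        path.append(tuple(p + s * (c - p)
                          for p, c in zip(preset_pos, component_pos)))
    return path
```

Rendering one frame per generated position yields the gradual motion that, per the abstract, lets the user intuitively follow the viewing-angle adjustment.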
Optionally, the controller 802 is configured to display the environmental object in a semi-transparent state if the environmental object included in the environmental information blocks the vehicle model within the preset view angle range.
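One way to decide when an environmental object should switch to the semi-transparent state is a sight-line test. The following 2-D sketch is an assumed illustration (names and geometry simplifications are not from the patent): an object is marked translucent when it sits on the camera-to-vehicle-model sight line, closer to the camera than the model.

```python
import math

def should_be_translucent(cam, model, obj, obj_radius):
    """An environment object is drawn semi-transparent when it lies close
    to the camera->model sight line and in front of the model."""
    vx, vy = model[0] - cam[0], model[1] - cam[1]
    wx, wy = obj[0] - cam[0], obj[1] - cam[1]
    seg_len = math.hypot(vx, vy)
    t = (wx * vx + wy * vy) / (seg_len ** 2)     # projection along sight line
    if not 0.0 < t < 1.0:                        # behind camera or past model
        return False
    dist = abs(wx * vy - wy * vx) / seg_len      # distance to the sight line
    return dist <= obj_radius
```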
Optionally, the controller 802 is configured to adjust the preset viewing angle in response to a rotation, dragging, or zooming operation on the vehicle model, or in response to attention information detected by the vehicle;
after the preset visual angle is adjusted, highlighting the interactive elements corresponding to the controllable components moved into the preset visual angle range;
and after the preset visual angle is adjusted, the interactive elements corresponding to the controllable components which are moved out of the preset visual angle range disappear.
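Which interactive elements appear or disappear after a viewing-angle adjustment can be computed by re-testing each controllable component against the viewing range. The sketch below is a simplified 2-D field-of-view test with assumed names; set differences between the results before and after the adjustment give the elements to highlight or hide.

```python
import math

def visible_elements(components, cam_pos, cam_dir_deg, fov_deg):
    """Return the ids of controllable components whose anchor points fall
    inside the viewing-angle range (simplified to a 2-D field of view
    centred on the camera heading)."""
    inside = set()
    for cid, (x, y) in components.items():
        ang = math.degrees(math.atan2(y - cam_pos[1], x - cam_pos[0]))
        diff = (ang - cam_dir_deg + 180.0) % 360.0 - 180.0   # signed offset
        if abs(diff) <= fov_deg / 2.0:
            inside.add(cid)
    return inside
```

For example, components in `visible_elements(after) - visible_elements(before)` would be newly highlighted, and those in the opposite difference would disappear.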
Optionally, the controller 802 is configured to adjust a position or an angle of the vehicle model in the display screen in response to a moving operation or a rotating operation on the vehicle model;
the interactive element disappears during the position or angle adjustment of the vehicle model in the display.
Optionally, the controller 802 is configured to restore the viewing angle of the vehicle model, whose position or angle has been adjusted in the display screen, to the preset viewing angle range if no triggering operation is received within a time threshold;
and to redisplay the interactive elements.
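The timeout behaviour, restoring the preset viewing angle and redisplaying the interactive elements when no triggering operation arrives within the time threshold, can be sketched as a small state holder. This is a hypothetical illustration; the class and field names are assumptions.

```python
class ViewController:
    """Track user view adjustments and snap back to the preset viewing
    angle after an idle timeout, re-showing the interactive elements."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_input = 0.0
        self.view = "preset"
        self.elements_visible = True

    def on_user_adjust(self, now):
        self.view = "user"
        self.elements_visible = False   # elements disappear while adjusting
        self.last_input = now

    def tick(self, now):
        if self.view == "user" and now - self.last_input >= self.timeout:
            self.view = "preset"        # restore the preset viewing range
            self.elements_visible = True
```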
Optionally, the controller 802 is configured to display the vehicle model in an enlarged manner when the interactive element corresponding to the controllable component is triggered; the controllable component includes: at least one of a vehicle door, a tail door and a charging opening cover;
when the controllable component is opened or closed, the corresponding interactive element is displayed in a breathing-light state.
Optionally, the controller 802 is configured to display a central lock control, a tail gate control, and a charging port control in the display screen;
when the central lock control is triggered, the central lock model is switched to a miniature vehicle model;
and the outer frame of the central lock control is displayed dynamically.
Optionally, the controller 802 is configured to, when a door-opening instruction from the user is received, determine the vehicle door to be opened according to the detected positional relationship between at least one occupant and the target vehicle;
and, when an occupant interferes with opening of the vehicle door, to display the door-opening trajectory and highlight the interference position between the door and the target occupant.
Optionally, the controller 802 is configured to pull up the camera of the preset viewing angle range in response to a parking-out request for the target vehicle, and to display an adjusted viewing angle range containing the parking-out path.
Optionally, the controller 802 is configured to adjust the preset view angle to a depression angle, and display a sensor sensing area of the vehicle model; the sensor sensing area is displayed at a position corresponding to a sensor of the target vehicle;
and displaying the multi-level sensing area according to the sensitivity.
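Displaying a multi-level sensing area according to sensitivity can be reduced to partitioning each sensor's range into concentric rings, one ring per level. The helper below is an assumed illustration, not the patent's method; the innermost ring would represent the most sensitive (closest) zone.

```python
def sensing_rings(sensor_range_m, levels=3):
    """Split a sensor's detection range (metres) into concentric rings,
    one (inner, outer) radius pair per sensitivity level."""
    step = sensor_range_m / levels
    return [(i * step, (i + 1) * step) for i in range(levels)]
```

Each ring could then be rendered around the sensor's position on the vehicle model under the depression-angle view described above.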
Optionally, the controller 802 is configured to adjust the viewing angle, with the vehicle model as the viewing center, from the preset viewing angle to a viewing angle at which the charging port is visible while the charging gun is being plugged in or unplugged;
and to display the charging port dynamically.
Optionally, the controller 802 is configured to perform a drift display of the vehicle model in the display screen within a preset viewing angle range.
Optionally, the controller 802 is configured to move from the preset viewing angle toward the position of the vehicle state change to a first viewing angle when the vehicle state of the target vehicle changes; the vehicle state change includes: at least one of a tire pressure state, a tire temperature state, a door prompt state, and a trailer state;
and displaying the changed vehicle state and/or the corresponding environment information.
Optionally, the controller 802 is configured to generate a function description scenario based on the vehicle model in response to a triggering instruction for turning on a vehicle function of the target vehicle;
and displaying the response state of the vehicle model in the function description scene and/or the corresponding environment information after the vehicle function is started or closed.
The display device 803 of fig. 8 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 804 of fig. 8 above, may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
Further, as shown in fig. 8, the vehicle apparatus further includes: a communications component 805, a power component 806, and the like. Only some of the components are schematically shown in fig. 8; this does not mean that the vehicle apparatus includes only the components shown in fig. 8.
The communications component 805 of fig. 8 described above is configured to facilitate communication, in a wired or wireless manner, between the device in which the communications component resides and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth technology, and other technologies.
The power supply 806 provides power to various components of the device in which the power supply is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
Accordingly, the present application further provides a computer program product which, when executed, implements the steps of the method embodiment of fig. 2.
In the embodiment of the application, when the target vehicle is in a parking state, the environmental information around the target vehicle and the vehicle model are rendered into a display picture; the controllable components of the target vehicle that are visible within the preset viewing angle range are highlighted in the vehicle model in the form of interactive elements; and, in response to a triggering operation on an interactive element, the view moves from the preset viewing angle toward the controllable component corresponding to the triggered interactive element to a first viewing angle, so as to show the triggered controllable component and the environmental information contained in the first viewing angle. With this scheme, the vehicle model and the surrounding environment information are rendered in the display picture, so that the user can learn the condition of the target vehicle and the environment outside the vehicle from within the cabin in a timely manner. Furthermore, the controllable components displayed on the vehicle model can be controlled through the interactive elements associated with the model. When the user triggers an interactive element, the controllable component corresponding to that element is what the user is currently attending to; moving the preset viewing angle range toward that controllable component therefore lets the user intuitively follow the viewing-angle adjustment and obtain a better viewing experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A parking display method, characterized in that the method comprises:
when a target vehicle is in a parking state, rendering environmental information around the target vehicle and a vehicle model into a display picture;
highlighting in the vehicle model, in the form of interactive elements, controllable components of the target vehicle that are visible within a preset viewing angle range;
and responding to the triggering operation of the interactive element, and moving from the preset view angle to a first view angle in the direction of the controllable component corresponding to the triggered interactive element so as to show the triggered controllable component and the environmental information contained in the first view angle.
2. The method of claim 1, wherein rendering environmental information around the target vehicle and a vehicle model into a display comprises:
and if the vehicle model is occluded, within the preset viewing angle range, by an environmental object contained in the environmental information, displaying the environmental object in a semi-transparent state.
3. The method of claim 1 or 2, further comprising:
adjusting the preset viewing angle in response to a rotation, dragging, or zooming operation on the vehicle model, or in response to attention information detected by the vehicle;
and, after the preset viewing angle is adjusted, highlighting the interactive elements corresponding to the controllable components moved into the preset viewing angle range, and hiding the interactive elements corresponding to the controllable components moved out of the preset viewing angle range.
4. The method of any of claims 1 to 3, further comprising:
adjusting the position or angle of the vehicle model in a display screen in response to a moving operation or a rotating operation on the vehicle model;
the interactive element disappears during the position or angle adjustment of the vehicle model in the display.
5. The method of any of claims 1 to 4, further comprising:
if the triggering operation is not received within a time threshold range, restoring the viewing angle of the vehicle model, the position or angle of which has been adjusted in the display picture, to the preset viewing angle range;
and displaying the interactive elements.
6. The method of any one of claims 1 to 5, further comprising:
and responding to a parking-out request or a charging gun plugging request for the target vehicle, pulling up the camera of the preset viewing angle range, and displaying an adjusted viewing angle range containing the parking-out path or the charging port.
7. The method of any of claims 1 to 6, further comprising:
adjusting the preset visual angle to a depression angle, and displaying a sensor sensing area of the vehicle model; the sensor sensing area is displayed at a position corresponding to a sensor of the target vehicle;
and displaying the multi-level sensing area according to the sensitivity.
8. The method of any of claims 1 to 7, further comprising:
when a door-opening instruction from a user is received, determining the vehicle door to be opened according to the detected positional relationship between at least one occupant and the target vehicle;
and, when the occupant interferes with opening of the vehicle door, displaying the door-opening trajectory and highlighting the interference position between the door and the target occupant.
9. A vehicle terminal, comprising: a memory, a processor;
the memory to store one or more computer instructions;
the processor is configured to execute the one or more computer instructions for performing the steps in the method of any of claims 1-8.
10. A vehicle characterized in that it is equipped with a vehicle terminal according to claim 9.
CN202211321625.9A 2022-10-26 2022-10-26 Parking display method, vehicle terminal, vehicle and product Pending CN115848271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211321625.9A CN115848271A (en) 2022-10-26 2022-10-26 Parking display method, vehicle terminal, vehicle and product


Publications (1)

Publication Number Publication Date
CN115848271A true CN115848271A (en) 2023-03-28

Family

ID=85661891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211321625.9A Pending CN115848271A (en) 2022-10-26 2022-10-26 Parking display method, vehicle terminal, vehicle and product



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination