CN109427220B - Virtual reality-based display method and system - Google Patents


Publication number
CN109427220B
CN109427220B (application CN201710759049.9A)
Authority
CN
China
Prior art keywords
virtual reality
information
display
image
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710759049.9A
Other languages
Chinese (zh)
Other versions
CN109427220A (en)
Inventor
李炜 (Li Wei)
孙其民 (Sun Qimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inlife Handnet Co Ltd
Original Assignee
Inlife Handnet Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inlife Handnet Co Ltd filed Critical Inlife Handnet Co Ltd
Priority to CN201710759049.9A priority Critical patent/CN109427220B/en
Publication of CN109427220A publication Critical patent/CN109427220A/en
Application granted granted Critical
Publication of CN109427220B publication Critical patent/CN109427220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/05 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles the view from a vehicle being simulated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the invention discloses a virtual reality-based display method and system. The method includes: acquiring a starting point position, a destination position, vehicle information, and body information of a target person; obtaining route information according to the starting point position and the destination position, and obtaining a virtual reality image corresponding to the route information; acquiring a gaze focus and a display range according to the vehicle information and the body information; and adjusting the display content of the virtual reality image according to the gaze focus and the display range. A targeted virtual reality image is provided for different vehicle information and body information, improving the realism of simulated driving.

Description

Virtual reality-based display method and system
Technical Field
The invention relates to the technical field of communication, in particular to a display method and a display system based on virtual reality.
Background
Current automobile driving simulators provide a simulated driving operation device paired with a display device to give users a simulated driving experience. However, the display device is a flat panel that shows only simple road and environmental information, so the experience is poor.
Virtual reality technology is a computer simulation technique that can create and let users experience a virtual world: a computer generates a simulated environment, providing a system simulation of multi-source information fusion, interactive three-dimensional dynamic views, and entity behavior. Combining virtual reality with automobile driving simulation can improve the simulated-driving effect, but at present the two are merely combined in a simple way, so the simulation effect remains poor.
Disclosure of Invention
The invention aims to provide a virtual reality-based display method and system that can present different virtual reality images to different users, enhancing the effect of simulated driving and bringing it closer to real driving.
In order to solve the above problem, an embodiment of the present invention provides a display method based on virtual reality, where the method includes:
acquiring a starting point position, a destination position, vehicle information and body information of a target person;
obtaining route information according to the starting point position and the destination position, and obtaining a virtual reality image corresponding to the route information according to the route information;
acquiring a fixation focus and a display range according to the vehicle information and the body information; and
adjusting the display content of the virtual reality image according to the gaze focus and the display range.
Similarly, to solve the above problem, an embodiment of the present invention further provides a display system based on virtual reality, where the display system includes:
initial information acquisition means for acquiring a start point position, a destination position, vehicle information, and body information of a target person;
the first acquisition device is used for acquiring route information according to the starting point position and the destination position and acquiring a virtual reality image corresponding to the route information according to the route information;
the second acquisition device is used for acquiring a watching focus and a display range according to the vehicle information and the body information; and
an adjusting device for adjusting the display content of the virtual reality image according to the gaze focus and the display range.
In the embodiment of the invention, a starting point position, a destination position, vehicle information, and body information of a target person are first acquired; route information is then obtained according to the starting point position and the destination position, and a virtual reality image corresponding to the route information is obtained; a gaze focus and a display range are then acquired according to the vehicle information and the body information; and finally the display content of the virtual reality image is adjusted according to the gaze focus and the display range. A targeted virtual reality image is thus provided for different vehicle information and body information, improving the realism of simulated driving.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a virtual reality-based presentation method according to an embodiment of the present invention;
FIG. 2 is another flow chart of a virtual reality-based presentation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a virtual reality based presentation system according to an embodiment of the present invention;
FIG. 4 is another schematic diagram of a virtual reality based presentation system according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of a virtual reality image according to an embodiment of the invention;
fig. 6 is a schematic diagram of a virtual reality-based presentation server according to an embodiment of the present invention.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description that follows, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. These steps and operations are at times referred to as being performed by a computer, meaning that a processing unit of the computer manipulates electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data is maintained in a data structure, a physical location in memory with particular characteristics defined by the data format. However, while the principles of the invention are described in these terms, they are not limited to this specific form; the various steps and operations described below may also be implemented in hardware.
The term "module" as used herein may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein are preferably implemented in software, but may also be implemented in hardware, and are within the scope of the present invention.
The embodiment of the invention provides a display method and a display system based on virtual reality.
In this embodiment, a display method based on virtual reality includes: acquiring a starting point position, a destination position, vehicle information and body information of a target person; obtaining route information according to the starting point position and the destination position, and obtaining a virtual reality image corresponding to the route information according to the route information; acquiring a fixation focus and a display range according to the vehicle information and the body information; and adjusting the display content of the virtual reality image according to the gazing focus and the display range.
Referring to fig. 1, fig. 1 is a flowchart of a virtual reality-based display method according to an embodiment of the present invention, where the method includes step S101, step S102, step S103, and step S104.
Specifically, the virtual reality-based display method includes step S101: acquiring a starting point position, a destination position, vehicle information, and body information of a target person;
step S102: obtaining route information according to the starting point position and the destination position, and obtaining a virtual reality image corresponding to the route information;
step S103: acquiring a gaze focus and a display range according to the vehicle information and the body information; and
step S104: adjusting the display content of the virtual reality image according to the gaze focus and the display range.
In step S101, a starting point position, a destination position, vehicle information, and body information of a target person are acquired. These may be entered by voice, using keywords and data such as "starting point: gate of shopping mall A", "destination: subway entrance B", "vehicle: car C", and "height: 175 cm". If no starting point is entered, the current position obtained from the positioning system is used by default. The information can also be obtained through key input, option selection, and the like.
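As a minimal sketch of the keyword-and-value input style described above (the function name, keyword set, and default handling are illustrative assumptions, not taken from the patent):

```python
def parse_initial_info(utterances):
    """Parse keyword-style inputs such as 'destination: B' into a dict.

    Unrecognized keys are ignored. A missing starting point defaults to
    the current position from a positioning system (stubbed as 'CURRENT').
    """
    info = {"start": "CURRENT"}  # default: current position from positioning
    keys = {"starting point": "start", "destination": "dest",
            "vehicle": "vehicle", "height": "height"}
    for u in utterances:
        key, _, value = u.partition(":")
        field = keys.get(key.strip().lower())
        if field:
            info[field] = value.strip()
    return info

info = parse_initial_info(["Destination: B", "Vehicle: C", "Height: 175 cm"])
```

Voice, key input, and option selection would all feed the same structure; only the front end differs.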
In step S102, route information is acquired according to the starting point position and the destination position, and a virtual reality image corresponding to the route information is acquired according to the route information.
The route information is acquired automatically according to the starting point position and the destination position; it can be of different types, such as shortest distance or fastest arrival, and can further distinguish whether the route includes an expressway. A virtual reality image corresponding to the route information is then acquired. Either all virtual reality images for the route can be downloaded in advance, or only the images for the starting position and the first part of the route, with the image for the next route segment obtained in real time as the virtual position changes.
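The segment-by-segment download strategy above might be sketched as follows (class name, `prefetch` parameter, and the string-placeholder "download" are assumptions for illustration):

```python
class RouteImageStream:
    """Load route imagery segment by segment as the virtual position advances."""

    def __init__(self, route_segments, prefetch=1):
        self.segments = route_segments
        self.prefetch = prefetch      # how many segments to load ahead
        self.loaded = {}

    def fetch_segment(self, i):
        # Placeholder for a real network download of one segment's imagery.
        return f"vr-image:{self.segments[i]}"

    def on_position(self, segment_index):
        """Called as the virtual position advances; loads current + lookahead."""
        upto = min(segment_index + self.prefetch, len(self.segments) - 1)
        for i in range(segment_index, upto + 1):
            if i not in self.loaded:
                self.loaded[i] = self.fetch_segment(i)
        return self.loaded[segment_index]

stream = RouteImageStream(["seg0", "seg1", "seg2"])
first = stream.on_position(0)   # loads seg0 and one lookahead segment
```

Downloading everything up front trades startup time for robustness; the streaming variant saves storage and bandwidth, matching the trade-off the passage describes.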
In step S103, a gaze focus and a display range are acquired from the vehicle information and the body information.
Specifically, the vehicle information includes vehicle type information and seat height information. The vehicle type information covers ordinary cars, SUVs, off-road vehicles, trucks of different models, and the like, and may also include the corresponding brand and specific model. It can be acquired by receiving manual input from the user, or multi-level options can be provided for the user to select from. Once the specific model is known, the chassis height, seat height, and the size and angle of the front and side window glass are obtained for that model. The information may also include top speed, acceleration, and the like.
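A model-to-geometry lookup of the kind described could look like this (the record fields mirror the attributes listed above; all numeric values are placeholders, not real vehicle data):

```python
from dataclasses import dataclass

@dataclass
class VehicleGeometry:
    chassis_height_cm: float
    seat_height_cm: float
    front_window_angle_deg: float   # windshield rake angle

# Illustrative table keyed by model; entries and values are invented.
VEHICLE_TABLE = {
    "sedan-C": VehicleGeometry(14.0, 28.0, 27.0),
    "suv-D":   VehicleGeometry(22.0, 34.0, 33.0),
}

def lookup_vehicle(model):
    """Resolve a selected model to the geometry used for view computation."""
    return VEHICLE_TABLE[model]

geom = lookup_vehicle("suv-D")
```

The multi-level selection UI (brand, then model) would simply narrow down to one key of such a table.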
The body information includes the head position, the height difference between the eyes and the seat, and the gaze direction. The position of the eyes in the vehicle, including its height and angle, is determined from the head position and the eye-seat height difference.
During changes in the angle of the head and/or eyeballs, the position of the first pupil center of the first eyeball and the position of the second pupil center of the second eyeball are determined in real time; the position of the first retina of the first eyeball and the position of the second retina of the second eyeball are fitted according to the structure of the eye; a first line connecting the first pupil center and the first retina and a second line connecting the second pupil center and the second retina are fitted; and the intersection of the first line and the second line is taken as the gaze focus of the eyes.
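In 3D, two measured gaze lines rarely intersect exactly, so a common realization of "take the intersection" is the midpoint of the closest approach between the two rays. The following sketch (pure 3-vector math; assumes non-parallel rays, and the coordinates are illustrative) computes that point from each pupil center and its pupil-retina direction:

```python
def gaze_focus(p1, d1, p2, d2):
    """Estimate the binocular gaze focus as the midpoint of closest
    approach between two gaze rays (origin = pupil center, direction
    along the pupil-retina line)."""
    def dot(a, b):   return sum(x * y for x, y in zip(a, b))
    def sub(a, b):   return [x - y for x, y in zip(a, b)]
    def add(a, b):   return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # nonzero for non-parallel rays
    s = (b * e - c * d) / denom        # parameter of closest point on ray 1
    t = (a * e - b * d) / denom        # parameter of closest point on ray 2
    q1 = add(p1, scale(d1, s))
    q2 = add(p2, scale(d2, t))
    return scale(add(q1, q2), 0.5)     # midpoint of the closest segment

# Eyes 6 cm apart, both converging on a point 1 m ahead on the midline.
focus = gaze_focus([-3.0, 0.0, 0.0], [3.0, 0.0, 100.0],
                   [3.0, 0.0, 0.0], [-3.0, 0.0, 100.0])
```

This is the standard closest-points-between-lines construction; when the rays do intersect, the midpoint coincides with the true intersection.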
Then, the range and angle of the visual field are calculated according to the gaze focus, the body information, and the vehicle information to obtain the corresponding display range.
In step S104, the display content of the virtual reality image is adjusted according to the gaze focus and the display range.
The display content of the virtual reality image is adjusted according to the different gaze focuses and display ranges, for example by adjusting the display range and display angle of the virtual reality image.
According to the embodiment of the invention, different users select different vehicles and have different heights, so the actual driving view differs from user to user; providing a specific virtual reality image for each combination of vehicle information and body information therefore improves the realism of the driving simulation. For example, a tall user's eyes sit higher relative to the window, and a higher chassis raises them further, so the image obtained through the window shows more of what lies below and less of what lies above than for other users, better matching actual driving. When first learning to drive, such a user adapts to the real vehicle more easily, more readily overcomes the fear of driving in the early on-road stage, becomes familiar with road conditions first, is less prone to nervousness, and has a shorter adaptation period.
Further, the adjusting the display content of the virtual reality image according to the gaze focus and the display range includes:
the virtual reality image comprises a first display image within a first distance of the gaze focus and a second display image beyond the first distance, and the definition of the first display image is adjusted to be higher than that of the second display image.
The first display image within the first distance of the gaze focus is the area the user is focusing on, while the second display image beyond the first distance is an area the user is not focusing on; rendering the latter at lower definition saves storage space, download time, and data traffic while keeping the region of interest sharp. The first distance can be adjusted according to the distance of the gaze focus, or set as a proportion of the display content, for example half of the display content centered on the gaze focus as the first display image and the rest as the second display image. Similarly, several definition levels can be set: for example, the first display image within a first distance of the gaze focus is ultra-high definition, a second display image between the first distance and a second distance is high definition, and a third display image beyond the second distance is standard definition. Note that not only two or three definition levels can be set; more levels, such as four, five, or six, are also possible.
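The tiered-definition scheme above amounts to mapping each display point to a level by its distance from the gaze focus. A sketch, with invented radii and level names (the patent fixes neither):

```python
def clarity_level(point, focus, tiers=((10.0, "ultra-hd"), (25.0, "hd"))):
    """Assign a definition level by distance from the gaze focus.

    `tiers` is an ordered sequence of (radius, level) pairs; anything
    beyond the last radius falls back to standard definition. Extra
    tiers (four, five, six levels...) can simply be appended.
    """
    dist = sum((a - b) ** 2 for a, b in zip(point, focus)) ** 0.5
    for radius, level in tiers:
        if dist <= radius:
            return level
    return "sd"

levels = [clarity_level((d, 0.0), (0.0, 0.0)) for d in (5.0, 20.0, 40.0)]
```

A renderer would evaluate this per tile rather than per pixel, downloading or decoding each tile at the returned quality.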
Further, the display method based on virtual reality further comprises the following steps:
detecting the deflection angle and turning direction of the target person's current visual angle relative to a reference visual angle;
when the deflection angle is larger than a preset threshold, generating an image adjustment instruction based on the deflection angle and turning direction; and
adjusting the display content of the virtual reality image according to the image adjustment instruction, so that display content beyond the target person's current visual angle is shown in the virtual reality image.
When the gaze direction deflects from the middle toward a side, or upward or downward, the virtual reality image is switched accordingly, but the deflection angle of the gaze direction is physically limited. The deflection angle and turning direction of the target person's current visual angle relative to the reference visual angle are therefore detected. When the deflection angle exceeds a preset threshold, the desired image cannot be seen, or not clearly, and holding the head and/or eyes deflected is tiring. In that case an image adjustment instruction is generated based on the deflection angle and turning direction, and the display content of the virtual reality image is adjusted accordingly, so that content beyond the current visual angle is shown without the user having to maintain a large deflection. The preset angle threshold may be, for example, 60 or 80 degrees, and different values can be set for different users: a larger angle for a young person, a smaller one for an elderly person.
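The threshold test and resulting instruction might be sketched as follows (the instruction's dictionary shape and the per-user `threshold_deg` default are assumptions; angles are in degrees, positive to the right):

```python
def image_adjust_instruction(current_deg, reference_deg, threshold_deg=60.0):
    """Generate an image adjustment instruction once the view deflects
    past the threshold, so content beyond the current visual angle can
    be brought into view. The threshold is configurable per user
    (e.g. smaller for elderly users)."""
    deflection = current_deg - reference_deg
    if abs(deflection) <= threshold_deg:
        return None                           # comfortable range: no change
    direction = "left" if deflection < 0 else "right"
    overshoot = abs(deflection) - threshold_deg
    return {"turn": direction, "extra_degrees": overshoot}

instr = image_adjust_instruction(75.0, 0.0, threshold_deg=60.0)
```

The renderer would consume the instruction by rotating the displayed content the extra amount, sparing the user the sustained deflection.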
Referring to fig. 2, fig. 2 is another flowchart of a virtual reality-based display method according to an embodiment of the present invention, where the method includes step S201, step S202, step S203, step S204, step S205, step S206, and step S207. Step S201, step S202, step S203, and step S204 in fig. 2 are the same as step S101, step S102, step S103, and step S104 in fig. 1, respectively, and are not described again here.
In this embodiment, in step S205, the virtual reality images are switched at different rates according to the control instruction;
in step S206, a stay time during which the gaze focus stays at a target position of the virtual reality image is detected;
in step S207, after the staying time exceeds a preset time, displaying corresponding information of the target position, and simultaneously reducing a rate of switching the virtual reality image.
When vehicle driving is being simulated, the virtual reality images can be switched at different rates according to control instructions, simulating a real driving scene; the switching rate corresponds to the vehicle speed. The control instruction can be input by voice or through a simulated driving device, such as a simulated accelerator or simulated brake. When the dwell time of the gaze focus at a target position of the virtual reality image is detected to exceed a preset time, corresponding information about the target position is displayed and the switching rate of the virtual reality image is reduced. For example, if the user is interested in a certain target, such as a park or a shop, and the gaze dwell time at that target position exceeds the preset time, information about the target is presented in text or voice while the image-switching rate, i.e. the simulated vehicle speed, is reduced, matching what the user would do when actually driving and improving safety awareness. The switching rate can be reduced to 0, i.e. switching stops, which is equivalent to stopping the vehicle. Items that block the user's view, such as the roof or doors of the vehicle, may be removed at this point.
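The dwell-triggered slowdown described above can be sketched as a small controller (class name, the 2-second threshold, and the halving factor are illustrative assumptions):

```python
class DwellController:
    """Slow image switching when the gaze dwells on one target.

    `rate` stands for the image-switching rate (the simulated vehicle
    speed); once the dwell time exceeds `threshold_s`, the rate is
    reduced and an info message for the target is emitted.
    """
    def __init__(self, rate, threshold_s=2.0, slow_factor=0.5):
        self.rate = rate
        self.threshold_s = threshold_s
        self.slow_factor = slow_factor
        self.dwell_s = 0.0
        self.target = None

    def update(self, target, dt_s):
        if target == self.target:
            self.dwell_s += dt_s           # same target: accumulate dwell
        else:
            self.target, self.dwell_s = target, dt_s   # new target: restart
        if self.dwell_s > self.threshold_s:
            self.rate *= self.slow_factor  # slow the switching (vehicle speed)
            return f"info:{target}"        # trigger text/voice info display
        return None

ctl = DwellController(rate=10.0)
msgs = [ctl.update("park", 1.5), ctl.update("park", 1.0)]
```

Driving `slow_factor` toward 0 over repeated frames reproduces the "reduce to 0, equivalent to stopping the vehicle" behavior.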
In the above embodiment, the method may adjust the virtual reality image using a simulated operation device, which can include one or more of a steering wheel, accelerator module, brake module, clutch module, hand brake, gear module, seat, and the like. The display content of the virtual reality image is adjusted through the simulated operation device, and the switching rate of the virtual reality image can also be controlled through it. The display content and switching rate can likewise be adjusted by voice, text input, and the like.
Referring to fig. 3, fig. 3 is a schematic view of a virtual reality-based presentation system according to an embodiment of the present invention. This virtual reality based display system includes: initial information acquisition means 301, first acquisition means 302, second acquisition means 303 and adjustment means 304.
Wherein, the initial information acquiring means 301 is used for acquiring the starting point position, the destination position, the vehicle information and the body information of the target person.
The starting point position, destination position, vehicle information, and body information of the target person can be entered by voice, using keywords and data such as "starting point: gate of shopping mall A", "destination: subway entrance B", "vehicle: car C", and "height: 175 cm". If no starting point is entered, the current position obtained from the positioning system is used by default. The information can also be obtained through key input, option selection, and the like.
A first obtaining device 302, configured to obtain route information according to the starting point location and the destination location, and obtain a virtual reality image corresponding to the route information according to the route information.
The route information is acquired automatically according to the starting point position and the destination position; it can be of different types, such as shortest distance or fastest arrival, and can further distinguish whether the route includes an expressway. A virtual reality image corresponding to the route information is then acquired. Either all virtual reality images for the route can be downloaded in advance, or only the images for the starting position and the first part of the route, with the image for the next route segment obtained in real time as the virtual position changes.
Second obtaining means 303, configured to obtain a gaze focus and a display range according to the vehicle information and the body information.
Specifically, the vehicle information includes vehicle type information and seat height information. The vehicle type information covers ordinary cars, SUVs, off-road vehicles, trucks of different models, and the like, and may also include the corresponding brand and specific model. It can be acquired by receiving manual input from the user, or multi-level options can be provided for the user to select from. Once the specific model is known, the chassis height, seat height, and the size and angle of the front and side window glass are obtained for that model. The information may also include top speed, acceleration, and the like.
The body information includes the head position, the height difference between the eyes and the seat, and the gaze direction. The position of the eyes in the vehicle, including its height and angle, is determined from the head position and the eye-seat height difference.
During changes in the angle of the head and/or eyeballs, the position of the first pupil center of the first eyeball and the position of the second pupil center of the second eyeball are determined in real time; the position of the first retina of the first eyeball and the position of the second retina of the second eyeball are fitted according to the structure of the eye; a first line connecting the first pupil center and the first retina and a second line connecting the second pupil center and the second retina are fitted; and the intersection of the first line and the second line is taken as the gaze focus of the eyes.
Then, the range and angle of the visual field are calculated according to the gaze focus, the body information, and the vehicle information to obtain the corresponding display range.
Adjusting means 304, configured to adjust display content of the virtual reality image according to the gazing focus and the display range.
The display content of the virtual reality image is adjusted according to the different gaze focuses and display ranges, for example by adjusting the display range and display angle of the virtual reality image.
According to the embodiment of the invention, different users select different vehicles and have different heights, so the actual driving view differs from user to user; providing a specific virtual reality image for each combination of vehicle information and body information therefore improves the realism of the driving simulation. For example, a tall user's eyes sit higher relative to the window, and a higher chassis raises them further, so the image obtained through the window shows more of what lies below and less of what lies above than for other users, better matching actual driving. When first learning to drive, such a user adapts to the real vehicle more easily, more readily overcomes the fear of driving in the early on-road stage, becomes familiar with road conditions first, is less prone to nervousness, and has a shorter adaptation period.
Further, the virtual reality image includes a first display image within a first distance of the gaze focus, and a second display image outside the first distance of the gaze focus.
The adjusting device is further used for adjusting the definition of the first display image to be higher than that of the second display image.
The first display image within the first distance of the gaze focus is the area the user attends to closely, while the second display image beyond the first distance is an area the user is not focusing on; rendering the latter at lower definition saves storage space, download time, and data traffic while keeping the emphasized area sharp for the user. The first distance can be adjusted according to the distance of the gaze focus, or according to the proportion of the display content of the virtual reality image: for example, with the gaze focus as the center, half of the display content is the first display image and the remainder is the second display image. Similarly, several definition levels can be set: for example, the first display image within a first distance of the gaze focus is an ultra-high-definition image, a second display image between the first distance and a second distance is a high-definition image, and a third display image beyond the second distance is a standard-definition image. Not only two- or three-level definition ranges can be set; more levels, such as four, five, or six, are also possible.
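A minimal sketch of the multi-level definition scheme: each screen position is assigned a definition level by its distance from the gaze focus, and adding entries to the band list yields four, five, or more levels. The function name, coordinate convention, and band values are assumptions for illustration only.

```python
def clarity_level(pixel_pos, focus_pos, bands=(0.1, 0.25)):
    """Return a definition level for a screen position.

    bands holds increasing distance limits in normalised screen units:
    level 0 (ultra-HD) inside the first band, level 1 (HD) inside the
    second, and so on; positions beyond every band get the lowest
    definition, level len(bands).
    """
    dx = pixel_pos[0] - focus_pos[0]
    dy = pixel_pos[1] - focus_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    for level, limit in enumerate(bands):
        if dist <= limit:
            return level
    return len(bands)
```

Extending the default two-band tuple to, say, four bands directly gives the five-level variant mentioned above.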
Further, the virtual reality-based display system further comprises a first detection device and an instruction generation device.
The first detection device is used for detecting the deflection angle and the steering of the current visual angle of the target person relative to the reference visual angle;
the instruction generating device is used for generating an image adjusting instruction based on the deflection angle and the steering when the deflection angle is larger than a preset threshold value;
the adjusting device is further used for adjusting the display content of the virtual reality image according to the image adjusting instruction, so that the target person can see the display content of the virtual reality image that lies beyond the current visual angle.
When the gazing direction deflects from the middle toward the side, or upward or downward, the virtual reality image is switched, but the deflection angle of the gazing direction is limited. The deflection angle and turning direction of the target person's current visual angle relative to the reference visual angle are therefore detected. When the deflection angle exceeds a preset threshold, the desired image cannot be seen, or not clearly, and keeping the head and/or eyes deflected is tiring. In that case an image adjustment instruction is generated based on the deflection angle and turning direction, and the display content of the virtual reality image is adjusted according to the instruction, so that the content lying beyond the current visual angle, i.e. beyond the deflection angle, is brought into view. More of the virtual reality image is thus shown to the user without requiring the user to hold a large deflection. The preset angle threshold may be, for example, 60 or 80 degrees, and different angles can be set for different users: a larger angle for a young person, a smaller one for an elderly person.
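The deflection-threshold logic can be sketched as follows, with yaw angles in degrees and a per-user threshold; the instruction format and all names are hypothetical, since the patent does not fix a concrete data structure.

```python
def image_adjust_instruction(current_yaw, reference_yaw, threshold_deg=60.0):
    """Return an image-adjust instruction when the view deflection
    exceeds the threshold, else None.

    threshold_deg is per-user (e.g. smaller for elderly users).  The
    returned dict records the turning direction and how far beyond the
    threshold the view has deflected, so the renderer can shift content
    lying beyond the current visual angle into view.
    """
    deflection = current_yaw - reference_yaw
    if abs(deflection) <= threshold_deg:
        return None                      # within comfortable range
    direction = "left" if deflection < 0 else "right"
    return {"turn": direction, "extra_angle": abs(deflection) - threshold_deg}
```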
Referring to fig. 4, fig. 4 is another schematic diagram of a virtual reality-based display system according to an embodiment of the present invention. The virtual reality-based display system comprises: initial information acquisition means 401, first acquisition means 402, second acquisition means 403, adjustment means 404, switching means 405, second detection means 406 and processing means 407.
The initial information obtaining device 401, the first obtaining device 402, the second obtaining device 403, and the adjusting device 404 in fig. 4 are respectively the same as the initial information obtaining device 301, the first obtaining device 302, the second obtaining device 303, and the adjusting device 304 in fig. 3, and are not described herein again.
In this embodiment, the switching device 405 is configured to switch the virtual reality images at different rates according to a control instruction;
second detecting means 406 for detecting a dwell time during which the gaze focus dwells at a target position of the virtual reality image;
and the processing device 407 is configured to display corresponding information of the target position and reduce a rate of switching the virtual reality image when the staying time exceeds a preset time.
When driving is being simulated, the virtual reality images can be switched at different rates according to control instructions, simulating a real driving scene; the switching rate is matched to the vehicle speed. The control instruction can be input by voice or by a simulated driving device, such as a simulated accelerator or brake device. When it is detected that the dwell time of the gaze focus at a target position of the virtual reality image exceeds a preset time, the corresponding information of the target position is displayed and the rate of switching the virtual reality image is reduced. For instance, the user may be interested in a certain target, such as a park or a shop; if the dwell time at that target position exceeds the preset time, the corresponding information is displayed, introducing the target position by text or voice, and at the same time the switching rate, i.e. the vehicle speed, is reduced, simulating what the user would actually do while driving and reinforcing the user's safety awareness. The switching rate can be reduced to 0, i.e. switching stops, which is equivalent to stopping the vehicle. Items that block the user's view, such as the roof or doors of the vehicle, may be removed at this point.
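A small sketch of the dwell-time mechanism: a monitor accumulates how long the gaze focus stays on one target and signals when the preset time is exceeded, at which point the caller would display the target's information and lower the switching rate, down to 0 to stop, as the vehicle-stop analogue. Class, method, and parameter names are illustrative, not the patent's.

```python
class GazeDwellMonitor:
    """Track how long the gaze focus dwells on one target position."""

    def __init__(self, preset_time=2.0):
        self.preset_time = preset_time   # seconds before info is shown
        self.target = None
        self.dwell = 0.0

    def update(self, target_id, dt):
        """Feed the currently gazed target and elapsed time dt; return
        True once the dwell time exceeds the preset time."""
        if target_id == self.target:
            self.dwell += dt
        else:                            # gaze moved: restart the timer
            self.target, self.dwell = target_id, 0.0
        return self.dwell >= self.preset_time

def reduced_rate(rate, factor=0.5, floor=0.0):
    """Lower the image-switching rate (the vehicle-speed analogue);
    a factor of 0 stops switching entirely, i.e. stops the vehicle."""
    return max(floor, rate * factor)
```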
It should be noted that, in the above embodiments, the system may further include a simulation operation device, which may include one or more of a steering wheel, an accelerator module, a brake module, a clutch module, a handbrake, a gear module, a seat, and the like. The display content of the virtual reality image can be adjusted through the simulation operation device, and the rate at which the virtual reality images are switched can likewise be adjusted through it. The display content and the switching rate of the virtual reality image can also be adjusted through voice, text input, and the like.
Referring to fig. 5, fig. 5 is a schematic diagram of a virtual reality image according to an embodiment of the present invention, which is implemented according to a virtual reality-based display method or a virtual reality-based display system.
A virtual reality image 501 is obtained according to the route information, and different display ranges are obtained according to different vehicle information and body information. As shown in the figure, the image in virtual frame 502 is the display content of one adjusted virtual reality image and the image in virtual frame 503 is the display content of another: frame 502 corresponds to eyes positioned higher above the floor of the vehicle interior, while frame 503 corresponds to eyes positioned lower and focused on a point further forward. Different virtual reality images are thus provided according to the vehicle, the user, and the user's position, which improves realism.
In the above embodiments, the descriptions of the embodiments have respective emphasis; for parts that are not described in detail in a certain embodiment, reference may be made to the above detailed description of the virtual reality-based display method, which is not repeated here.
The virtual reality-based display method according to the present embodiment can be implemented by a virtual reality-based display system.
In a specific implementation, the above apparatuses may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above apparatuses may refer to the foregoing method embodiments, which are not described herein again.
The virtual reality-based presentation system may specifically be integrated in a virtual reality device.
Accordingly, an embodiment of the present invention further provides a virtual reality-based presentation server. As shown in fig. 6, the virtual reality-based presentation server may include a Radio Frequency (RF) circuit 601, a memory 602 including one or more computer-readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, and a power supply 609. Those skilled in the art will appreciate that the server architecture shown in fig. 6 does not constitute a limitation of the virtual reality-based presentation server, which may include more or fewer components than shown, combine some components, or arrange the components differently. Wherein:
the RF circuit 601 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink messages from a base station and then processing the received downlink messages by one or more processors 608; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuit 601 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 601 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 602 may be used to store software programs and modules, and the processor 608 executes various functional applications and data processing by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function (such as the virtual reality image of a corresponding route), and the like; the data storage area may store data created according to the use of the virtual reality-based presentation server (such as route information, vehicle information, etc.). Further, the memory 602 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 608 and the input unit 603 with access to the memory 602.
The input unit 603 may be used to receive input numeric or character information and to generate signal input, related to user settings and function control, from a microphone, a touch screen, a body-sensing input device, a keyboard, a mouse, a joystick, an optical device, or a trackball. In particular, in one embodiment, the input unit 603 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 608, and can also receive and execute commands sent by the processor 608. In addition, the touch-sensitive surface may be implemented using resistive, capacitive, infrared, or surface acoustic wave technologies. Besides the touch-sensitive surface, the input unit 603 may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 604 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 604 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 608 to determine the type of touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 6 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The virtual reality-based presentation server may also include at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel according to the brightness of the ambient light, and a proximity sensor, which may turn off the display panel and/or the backlight when the device moves close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the device is stationary, and can be used for applications that recognize device posture (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). Other sensors that can be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here. It should be understood that these sensors are not essential components of the virtual reality-based presentation server and may be omitted as needed within a scope that does not change the essence of the invention.
The audio circuit 606, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 606 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 606 and converted into audio data; the audio data are then output to the processor 608 for processing and transmitted via the RF circuit 601 to, for example, another terminal, or output to the memory 602 for further processing. The audio circuit 606 may also include an earbud jack to allow peripheral headphones to communicate with the terminal.
WiFi belongs to short-distance wireless transmission technology, and the terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 607, and provides wireless broadband internet access for the user. Although fig. 6 shows the WiFi module 607, it is understood that it does not belong to the essential constitution of the virtual reality-based presentation server, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 608 is a control center of the terminal, connects various parts of the entire virtual reality-based presentation server using various interfaces and lines, and performs various functions of the virtual reality-based presentation server and processes data by running or executing software programs and/or modules stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the virtual reality-based presentation server. Optionally, processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 608.
The virtual reality based presentation server further comprises a power supply 609 (e.g., a battery) for supplying power to the various components, preferably, the power supply may be logically connected to the processor 608 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The power supply 609 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the virtual reality-based presentation server may further include a camera, a bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the processor 608 in the virtual reality-based presentation server loads the executable file corresponding to the process of one or more application programs into the memory 602 according to the following instructions, and the processor 608 runs the application programs stored in the memory 602, so as to implement various functions:
acquiring a starting point position, a destination position, vehicle information and body information; obtaining route information according to the starting point position and the destination position, and obtaining a virtual reality image corresponding to the route information according to the route information; acquiring a fixation focus and a display range according to the vehicle information and the body information; and adjusting the display content of the virtual reality image according to the gazing focus and the display range.
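The four steps listed above can be tied together in a pipeline sketch; the `services` bundle of callables (route planning, VR-image retrieval, gaze tracking, display-range computation, rendering) is entirely hypothetical, standing in for the devices of fig. 3 rather than reproducing any API from the patent.

```python
def run_presentation(start, dest, vehicle_info, body_info, services):
    """End-to-end sketch of the claimed method.

    services must expose: plan_route(start, dest), fetch_vr_image(route),
    gaze_focus(), display_range(focus, vehicle_info, body_info), and
    render(image, focus, display_range).  All names are illustrative.
    """
    route = services.plan_route(start, dest)        # route from start/destination
    vr_image = services.fetch_vr_image(route)       # VR image for that route
    focus = services.gaze_focus()                   # gaze focus from eye tracking
    view = services.display_range(focus, vehicle_info, body_info)
    return services.render(vr_image, focus, view)   # adjust display content
```

In a real system each callable would be backed by the corresponding acquisition or adjustment device.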
Preferably, the processor 608 is further configured to adjust the definition of the first display image to be higher than the definition of the second display image, where the virtual reality image includes the first display image within the first distance of the gazing focus and the second display image outside the first distance of the gazing focus.
Preferably, the vehicle information includes vehicle type information and seat height information, and the body information includes head position, height-difference information between the eyes and the seat, and gazing direction; the processor 608 is further configured to acquire the gaze focus and the display range according to the vehicle type information, the seat height information, the head position, the height-difference information between the eyes and the seat, and the gazing direction.
Preferably, the processor 608 is further configured to display the content of the virtual reality image lying beyond the current gazing direction when the deflection of the gazing direction exceeds a preset angle threshold.
Preferably, the processor 608 is further configured to switch the virtual reality image at different rates according to a control instruction; detecting a dwell time for the gaze focus to dwell at a target location of the virtual reality image; and when the staying time exceeds the preset time, displaying corresponding information of the target position, and simultaneously reducing the speed of switching the virtual reality image.
As can be seen from the above, the terminal provided in this embodiment first acquires the start point position, the destination position, the vehicle information, and the body information; then obtains route information according to the start point position and the destination position, and obtains the virtual reality image corresponding to the route information; then acquires a gaze focus and a display range according to the vehicle information and the body information; and finally adjusts the display content of the virtual reality image according to the gaze focus and the display range. A targeted virtual reality image is thus provided for different vehicle information and body information, improving the realism of the driving simulation.
In the above embodiments, the descriptions of the embodiments have respective emphasis; for parts that are not described in detail in a certain embodiment, reference may be made to the above detailed description of the virtual reality-based display method, which is not repeated here.
The virtual reality-based display system provided by the embodiment of the present invention may be, for example, a computer, a tablet computer, or a mobile phone with a touch function. The virtual reality-based display system and the virtual reality-based display method in the above embodiments belong to the same concept: any method provided in the embodiments of the virtual reality-based display method can run on the virtual reality-based display system, and its specific implementation process is described in those embodiments and is not repeated here.
It should be noted that, for the virtual reality-based display method of the present invention, a person skilled in the art may understand that all or part of the process of implementing the virtual reality-based display method according to the embodiments of the present invention may be completed by controlling related hardware through a computer program, where the computer program may be stored in a computer-readable storage medium, for example, in a memory of a virtual reality-based display server, and executed by at least one processor in the virtual reality-based display server, and during the execution process, the process of implementing the embodiment of the virtual reality-based display method may be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
For the display system based on virtual reality in the embodiment of the present invention, each functional module may be integrated in one processing chip, or each module may exist alone physically, or two or more modules are integrated in one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The display method and system based on virtual reality provided by the embodiment of the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation of the invention, and the description of the embodiment is only used to help understanding the method and the core idea of the invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A virtual reality-based display method is characterized by comprising the following steps:
acquiring a starting point position, a destination position, vehicle information and body information of a target person;
obtaining route information according to the starting point position and the destination position, downloading a virtual reality image of the starting point position and the first part of the route information, and obtaining a virtual reality image corresponding to the next section of the route according to the change of the virtual position;
acquiring a gaze focus of a user, and calculating a range and an angle of a visual field according to the gaze focus, the vehicle information and the body information to obtain a corresponding display range; and
adjusting the display content of the virtual reality image according to the gaze focus and the display range;
detecting a deflection angle and a steering of a current visual angle of the target person relative to a reference visual angle;
when the deflection angle is larger than a preset threshold value, generating an image adjusting instruction based on the deflection angle and steering;
and adjusting the display content of the virtual reality image according to the image adjusting instruction so that the target person can see the display content of the virtual reality image beyond the current visual angle.
2. The virtual reality-based presentation method according to claim 1, wherein the adjusting the display content of the virtual reality image according to the gaze focus and the display range comprises:
the virtual reality image comprises a first display image within a first distance of the watching focus and a second display image beyond the first distance of the watching focus, and the definition of the first display image is adjusted to be higher than that of the second display image.
3. The virtual reality-based presentation method according to claim 1, wherein the obtaining of the gaze focus and the display range according to the vehicle information and the body information comprises:
the vehicle information comprises vehicle type information and seat height information;
the body information comprises head position, height difference information of eyes and a seat and a gazing direction;
and acquiring a gazing focus and a display range according to the vehicle type information, the seat height information, the head position, the height difference information between the eyes and the seat and the gazing direction.
4. The virtual reality-based presentation method of claim 1, wherein the method further comprises:
switching the virtual reality images according to different rates according to a control instruction;
detecting a dwell time for the gaze focus to dwell at a target location of the virtual reality image;
and when the staying time exceeds the preset time, displaying corresponding information of the target position, and simultaneously reducing the speed of switching the virtual reality image.
5. A virtual reality-based presentation system, comprising:
initial information acquisition means for acquiring a start point position, a destination position, vehicle information, and body information of a target person;
the first acquisition device is used for acquiring route information according to the starting point position and the destination position, downloading a virtual reality image of the starting point position and the first part of the route information, and acquiring a virtual reality image corresponding to the next section of the route according to the change of the virtual position;
the second acquisition device is used for acquiring a gaze focus of a user, calculating a range and an angle of a visual field according to the gaze focus, the vehicle information and the body information, and obtaining a corresponding display range; and
adjusting means for adjusting the display content of the virtual reality image according to the gaze focus and the display range;
first detection means for detecting a deflection angle and a turning of a current angle of view of the target person with respect to a reference angle of view;
the instruction generating device is used for generating an image adjusting instruction based on the deflection angle and the steering when the deflection angle is larger than a preset threshold value;
the adjusting device is further used for adjusting the display content of the virtual reality image according to the image adjusting instruction so that the target person can view the display content of the virtual reality image beyond the current visual angle.
6. The virtual reality-based presentation system of claim 5, wherein the virtual reality images include a first display image within a first distance of the gaze focus, a second display image outside the first distance of the gaze focus;
the adjusting device is further used for adjusting the definition of the first display image to be higher than that of the second display image.
7. The virtual reality based presentation system of claim 5, wherein the vehicle information comprises vehicle type information, seat height information; the body information comprises head position, height difference information of eyes and a seat and a gazing direction;
the adjusting device is further used for obtaining a watching focus and a display range according to the vehicle type information, the seat height information, the head position, the height difference information between the eyes and the seat and the watching direction.
8. The virtual reality based presentation system of claim 5, wherein the system further comprises:
the switching device is used for switching the virtual reality images according to different rates according to a control instruction;
second detection means for detecting a stay time during which the gazing focus stays at a target position of the virtual reality image;
and the processing device is used for displaying corresponding information of the target position and reducing the speed of switching the virtual reality image when the staying time exceeds the preset time.
CN201710759049.9A 2017-08-29 2017-08-29 Virtual reality-based display method and system Active CN109427220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710759049.9A CN109427220B (en) 2017-08-29 2017-08-29 Virtual reality-based display method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710759049.9A CN109427220B (en) 2017-08-29 2017-08-29 Virtual reality-based display method and system

Publications (2)

Publication Number Publication Date
CN109427220A CN109427220A (en) 2019-03-05
CN109427220B true CN109427220B (en) 2021-11-30

Family

ID=65503758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710759049.9A Active CN109427220B (en) 2017-08-29 2017-08-29 Virtual reality-based display method and system

Country Status (1)

Country Link
CN (1) CN109427220B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887372A (en) * 2019-04-16 2019-06-14 北京中公高远汽车试验有限公司 Driving training analogy method, electronic equipment and storage medium
CN116176430B (en) * 2023-05-04 2023-08-29 长城汽车股份有限公司 Virtual key display method and device, vehicle and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101673927B1 (en) * 2015-09-25 2016-11-08 숭실대학교산학협력단 Remote control system and method for vehicle
CN106095089A (en) * 2016-06-06 2016-11-09 郑黎光 A kind of method obtaining interesting target information
CN106338828A (en) * 2016-08-31 2017-01-18 京东方科技集团股份有限公司 Vehicle-mounted augmented reality system, method and equipment
CN106412563A (en) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and apparatus
CN106445126A (en) * 2016-09-12 2017-02-22 镇江威勒信息技术有限公司 Method and system for exhibit visualization based on VR technology
CN106598252A (en) * 2016-12-23 2017-04-26 深圳超多维科技有限公司 Image display adjustment method and apparatus, storage medium and electronic device
CN106652043A (en) * 2016-12-29 2017-05-10 深圳前海弘稼科技有限公司 Method and device for virtual touring of scenic region
CN106648062A (en) * 2016-10-12 2017-05-10 大连文森特软件科技有限公司 Virtual reality technology and framing processing technology-based tourism landscape realization system
CN106843468A (en) * 2016-12-27 2017-06-13 努比亚技术有限公司 A kind of man-machine interaction method in terminal and VR scenes
DE102015226581A1 (en) * 2015-12-22 2017-06-22 Audi Ag Method for operating a virtual reality system and virtual reality system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323552B (en) * 2015-10-26 2019-03-12 北京时代拓灵科技有限公司 A kind of panoramic video playback method and system
CN105827960A (en) * 2016-03-21 2016-08-03 乐视网信息技术(北京)股份有限公司 Imaging method and device
CN106339980A (en) * 2016-08-22 2017-01-18 乐视控股(北京)有限公司 Automobile-based VR display device and method and automobile
CN106445129A (en) * 2016-09-14 2017-02-22 乐视控股(北京)有限公司 Method, device and system for displaying panoramic picture information
CN106333564A (en) * 2016-11-15 2017-01-18 田元元 Convenient multi-angle dressing mirror

Also Published As

Publication number Publication date
CN109427220A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN109388297B (en) Expression display method and device, computer readable storage medium and terminal
CN109905754B (en) Virtual gift receiving method and device and storage equipment
US11258893B2 (en) Method for prompting notification message and mobile terminal
WO2016041340A1 (en) An indication method and mobile terminal
CN104395935B (en) Method and apparatus for being presented based on the visual complexity of environmental information come modification information
CN107659637B (en) Sound effect setting method and device, storage medium and terminal
CN107749919B (en) Application program page display method and device and computer readable storage medium
US10636228B2 (en) Method, device, and system for processing vehicle diagnosis and information
US11216997B2 (en) Method and apparatus for displaying historical chat record
CN106127829B (en) Augmented reality processing method and device and terminal
CN109977845B (en) Driving region detection method and vehicle-mounted terminal
CN108958587B (en) Split screen processing method and device, storage medium and electronic equipment
CN107167147A (en) Air navigation aid, glasses and readable storage medium storing program for executing based on arrowband Internet of Things
CN108958629B (en) Split screen quitting method and device, storage medium and electronic equipment
CN111196281A (en) Page layout control method and device for vehicle display interface
CN108196753B (en) Interface switching method and mobile terminal
CN112330756A (en) Camera calibration method and device, intelligent vehicle and storage medium
US11131557B2 (en) Full-vision navigation and positioning method, intelligent terminal and storage device
CN109427220B (en) Virtual reality-based display method and system
CN111399792B (en) Content sharing method and electronic equipment
CN109474747B (en) Information prompting method and mobile terminal
CN106339391B (en) Webpage display method and terminal equipment
CN107729100B (en) Interface display control method and mobile terminal
CN113110487A (en) Vehicle simulation control method and device, electronic equipment and storage medium
WO2015135457A1 (en) Method, apparatus, and system for sending and playing multimedia information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant