CN114168096A - Display method, system, mobile terminal and storage medium of output picture - Google Patents


Info

Publication number
CN114168096A
CN114168096A (application CN202111496648.9A)
Authority
CN
China
Prior art keywords
display
helmet
image
headset
layer image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111496648.9A
Other languages
Chinese (zh)
Other versions
CN114168096B (en)
Inventor
张毅 (Zhang Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth New World Technology Co ltd
Original Assignee
Shenzhen Skyworth New World Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth New World Technology Co ltd
Priority to CN202111496648.9A
Publication of CN114168096A
Application granted
Publication of CN114168096B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/44526 Plug-ins; Add-ons

Abstract

The invention discloses a display method for an output picture, comprising the following steps: rendering VR data using inertial-unit data and display parameters of a VR headset to obtain a display image, where the inertial-unit data and display parameters are obtained from the VR headset through a selected driver plug-in, i.e., the driver plug-in corresponding to the VR headset in a preset driver plug-in set; storing the display image in a display buffer; and taking the display image out of the display buffer and sending it to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image. The invention also discloses a split VR head-mounted display system, a mobile terminal, and a computer-readable storage medium. With this method, the compatibility between the mobile terminal and the VR headset is improved, thereby improving the display effect of the final output picture.

Description

Display method, system, mobile terminal and storage medium of output picture
Technical Field
The present invention relates to the field of image display, and in particular, to a method and a system for displaying an output image, a mobile terminal, and a computer-readable storage medium.
Background
A split VR/AR head-mounted display is a device in which computation is placed at a host end; the headset is connected through a USB data cable, and the host performs data acquisition from, and display on, the headset. Unlike an all-in-one device, a split machine that separates the headset from the host is another kind of mobile VR equipment. This allows a more reasonable distribution of work among the headset, the processing platform, and the functional system, and therefore a higher level of performance. The standalone headset and the host box of a split machine are connected by a standard USB Type-C cable, the host box being essentially a professional-grade game machine. The split structure can meet the needs of mobile use while also delivering higher performance: freed from the very confined space of the headset, the host box's processing system can adopt a high-performance CPU and cooling system, or even directly use the powerful computing power of a PC, so the split machine can serve e-sports and gaming players who want an immersive experience. In addition, the split product design supports various connection modes: the headset can be connected directly to a computer host for use, and the host box can be connected to a television or a computer monitor.
To improve portability, split head-mounted display systems based on a mobile terminal have appeared: the mobile terminal is connected to the headset and takes over the role of the host, implementing the various functions of the split head-mounted display system.
However, the display effect of the final output picture of conventional split head-mounted display systems is poor.
Disclosure of Invention
The main purpose of the present invention is to provide a display method and system for an output picture, a mobile terminal, and a computer-readable storage medium, aiming to solve the technical problem that the final output picture of prior-art split head-mounted display systems has a poor display effect.
To achieve the above object, the present invention provides a display method for an output picture, applied to a mobile terminal, the method comprising the following steps:
when VR data is obtained, rendering the VR data using inertial-unit data and display parameters of a VR headset to obtain a display image, where the inertial-unit data and display parameters are obtained from the VR headset through a selected driver plug-in, i.e., the driver plug-in corresponding to the VR headset in a preset driver plug-in set;
storing the display image in a display buffer, where the display buffer is configured according to the display parameters;
and taking the display image out of the display buffer and sending it to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image.
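The three claimed steps can be sketched as a minimal pipeline. The patent provides no source code, so every name below (`read_imu`, `send`, the dictionary layout) is an illustrative assumption, not the actual implementation:

```python
# Minimal sketch of the claimed render -> buffer -> send pipeline.
# All names are illustrative; real rendering and USB I/O are stubbed out.

def render(vr_data, imu_data, display_params):
    """Stand-in for GPU rendering: tag the frame with its inputs."""
    return {"frame": vr_data,
            "pose": imu_data,
            "resolution": display_params["resolution"]}

def display_pipeline(vr_data, headset):
    plugin = headset["selected_plugin"]        # driver plug-in matched to this headset
    imu_data = plugin["read_imu"]()            # step 1: fetch inertial-unit data...
    params = plugin["read_display_params"]()   # ...and display parameters, then render
    image = render(vr_data, imu_data, params)

    buffer = []                                # step 2: display buffer sized per params
    buffer.append(image)

    image_out = buffer.pop(0)                  # step 3: take out and send via plug-in
    return plugin["send"](image_out)
```

Here a dictionary of callables stands in for the selected driver plug-in; a real implementation would read the IMU and push frames over the USB connection instead.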
Optionally, before the step of rendering the VR data using the inertial-unit data and display parameters of the VR headset to obtain a display image, the method further comprises:
when access of the VR headset is detected, acquiring device information of the VR headset;
and using the device information to determine, in the preset driver plug-in set, the selected driver plug-in corresponding to the VR headset.
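Determining the selected driver plug-in from device information amounts to a lookup keyed by manufacturer and product type. A hedged sketch, with vendor and plug-in names invented purely for illustration:

```python
# Sketch of selecting the driver plug-in for an attached headset.
# The registry contents are invented; a real set would hold vendor drivers.

PLUGIN_SET = {
    ("vendor_a", "model_1"): "plugin_a1",
    ("vendor_b", "model_2"): "plugin_b2",
}

def select_plugin(device_info):
    """Look up the plug-in matching the headset's manufacturer and product."""
    key = (device_info["manufacturer"], device_info["product"])
    plugin = PLUGIN_SET.get(key)
    if plugin is None:
        raise LookupError(f"no driver plug-in registered for {key}")
    return plugin
```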
Optionally, before the step of acquiring the device information of the VR headset when access of the VR headset is detected, the method further comprises:
performing a device-description-type definition operation on the VR headset using a preset device access specification;
performing definition operations for a control endpoint, a display endpoint, and an inertial measurement unit of the VR headset;
and monitoring hot-plugging of the VR headset using the device description type of the VR headset.
The step of acquiring device information of the VR headset when access of the VR headset is detected comprises:
when access of the VR headset is detected, determining whether the VR headset matches a preset object filtering rule, where the object filtering rule is obtained based on the device description type of the VR headset;
and when the VR headset matches the object filtering rule, acquiring the device information of the VR headset.
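The attach-then-filter flow above can be sketched as follows. The descriptor field names and class codes are assumptions for illustration, not values taken from the patent:

```python
# Sketch of matching an attached USB device against a preset object
# filtering rule derived from the headset's device description type.
# Field names and the example class/subclass/protocol values are assumed.

FILTER_RULE = {"device_class": 0xEF, "subclass": 0x02, "protocol": 0x01}

def matches_filter(device_descriptor, rule=FILTER_RULE):
    """True if every field of the rule is present and equal in the descriptor."""
    return all(device_descriptor.get(k) == v for k, v in rule.items())

def on_device_attached(device_descriptor):
    """Hot-plug callback: only matching devices yield device information."""
    if matches_filter(device_descriptor):
        return {"manufacturer": device_descriptor["manufacturer"],
                "product": device_descriptor["product"]}
    return None  # not a recognised headset; ignore the attach event
```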
Optionally, the display buffer comprises a main display buffer and an auxiliary display buffer, and the display image comprises a main view layer image and an auxiliary view layer image. The step of rendering the VR data using the inertial-unit data and display parameters of the VR headset to obtain a display image comprises:
determining view-angle information using the inertial-unit data;
obtaining the main view layer image using the view-angle information, the VR data, and the configuration parameters of the main display buffer;
and generating the auxiliary view layer image using the main view layer image and the configuration parameters of the auxiliary display buffer.
The step of storing the display image in a display buffer comprises:
storing the main view layer image in the main display buffer and the auxiliary view layer image in the auxiliary display buffer.
The step of taking the display image out of the display buffer and sending it to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image, comprises:
taking the main view layer image out of the main display buffer and the auxiliary view layer image out of the auxiliary display buffer, and sending both to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the main and auxiliary view layer images.
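A compact sketch of this dual-buffer variant, with illustrative dictionaries standing in for the rendered layers and the buffer configuration parameters:

```python
# Sketch of the main/auxiliary layer flow: render both layers, park each in
# its own buffer, then drain both to the headset. Names are illustrative.

def render_layers(vr_data, imu_data, main_cfg, aux_cfg):
    # View-angle information is derived from the inertial-unit reading.
    view = {"yaw": imu_data["yaw"], "pitch": imu_data["pitch"]}
    main_layer = {"data": vr_data, "view": view, "size": main_cfg["size"]}
    # The auxiliary layer is generated from the main layer, per the text.
    aux_layer = {"derived_from": main_layer["view"], "size": aux_cfg["size"]}
    return main_layer, aux_layer

def display_frame(vr_data, imu_data, main_cfg, aux_cfg, send):
    main_buf, aux_buf = [], []          # main and auxiliary display buffers
    main_layer, aux_layer = render_layers(vr_data, imu_data, main_cfg, aux_cfg)
    main_buf.append(main_layer)
    aux_buf.append(aux_layer)
    # Take each layer back out of its buffer and hand both to the plug-in.
    return send(main_buf.pop(0), aux_buf.pop(0))
```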
Optionally, the main view layer image carries first display information of the main view layer image, and the auxiliary view layer image carries second display information of the auxiliary view layer image, first depth information of the main view layer image, and second depth information of the auxiliary view layer image. The step of sending the main view layer image and the auxiliary view layer image to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to them, comprises:
sending the main view layer image and the auxiliary view layer image to the VR headset through the selected driver plug-in, so that the VR headset: determines a first display posture of the main view layer image and a second display posture of the auxiliary view layer image using the first and second display information; adjusts the first display posture using the first depth information and the inertial-unit data to obtain an adjusted first display posture, and adjusts the second display posture using the second depth information and the inertial-unit data to obtain an adjusted second display posture; determines new first display information of the main view layer image and new second display information of the auxiliary view layer image using the adjusted postures; and obtains and outputs an output picture based on the new first display information, the new second display information, the main view layer image, and the auxiliary view layer image.
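The depth-weighted posture adjustment can be illustrated with a deliberately simplified one-axis model; the patent does not specify the actual reprojection math, so this is only an assumed intuition: nearer layers are corrected toward the latest head pose more strongly than distant ones.

```python
# Assumed 1-D illustration of adjusting a layer's display posture with its
# depth information and the latest inertial-unit reading. Not the patent's
# actual formula, which is unspecified.

def adjust_posture(display_yaw, depth, latest_yaw):
    # Nearer content (small depth) exhibits more parallax, so it follows the
    # newest head yaw more closely; distant content barely moves.
    parallax_weight = 1.0 / (1.0 + depth)
    return display_yaw + (latest_yaw - display_yaw) * parallax_weight
```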
Optionally, after the step of sending the display image to the VR headset through the selected driver plug-in, the method further comprises:
receiving a display setting operation for the transmitted output picture;
obtaining a setting instruction based on the display setting operation;
and sending the setting instruction to the VR headset through the selected driver plug-in, so that the VR headset configures the control endpoint with the setting instruction.
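The setting path can be sketched as building a small control-endpoint instruction and handing it to the plug-in. The opcode table is invented for illustration; the patent only says the control endpoint covers items such as brightness and volume:

```python
# Sketch of turning a display-setting operation into a control-endpoint
# instruction. Opcode values are assumed, not taken from the patent.

OPCODES = {"brightness": 0x01, "volume": 0x02}

def make_setting_instruction(operation):
    """Translate a user-facing operation into a control-endpoint instruction."""
    return {"opcode": OPCODES[operation["kind"]], "value": operation["value"]}

def apply_setting(operation, plugin_send):
    """Build the instruction and send it through the selected driver plug-in."""
    instruction = make_setting_instruction(operation)
    return plugin_send(instruction)  # headset configures its control endpoint
```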
Optionally, after the step of sending the display image to the VR headset through the selected driver plug-in, the method further comprises:
when a virtual trigger operation is detected, creating a virtual display screen;
when a selection operation for an application program is detected, loading the application program;
mapping the display interface of the loaded application program onto the virtual display screen;
updating the VR data with the display interface mapped onto the virtual display screen;
and returning to the step of rendering the VR data using the inertial-unit data and display parameters of the VR headset to obtain a display image.
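A sketch of this virtual-display loop follows; the names are illustrative (on Android this would correspond to creating a virtual display and mapping an application surface onto it, but the patent does not name specific APIs):

```python
# Sketch of the virtual-display path: create a virtual screen, map the
# loaded app's interface onto it, and feed that interface back in as the
# new VR data for the next render pass. All names are illustrative.

def run_virtual_display(app_interface, render):
    virtual_screen = {"surface": None}        # created on the trigger operation
    virtual_screen["surface"] = app_interface # map the app UI onto the screen
    vr_data = virtual_screen["surface"]       # the mapped UI becomes VR data
    return render(vr_data)                    # return to the rendering step
```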
In addition, to achieve the above object, the present invention further provides a split VR head-mounted display system, comprising a VR headset and a mobile terminal, the mobile terminal comprising: a memory, a processor, and a display program of an output picture stored on the memory and executable on the processor, where the display program, when executed by the processor, implements the steps of the display method of an output picture according to any of the above.
In addition, to achieve the above object, the present invention further provides a mobile terminal, comprising: a memory, a processor, and a display program of an output picture stored on the memory and executable on the processor, where the display program, when executed by the processor, implements the steps of the display method of an output picture according to any of the above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a display program of an output picture which, when executed by a processor, implements the steps of the display method of an output picture according to any of the above.
The technical scheme of the present invention provides a display method for an output picture, applied to a mobile terminal, the method comprising the following steps: when VR data is obtained, rendering the VR data using inertial-unit data and display parameters of a VR headset to obtain a display image, where the inertial-unit data and display parameters are obtained from the VR headset through a selected driver plug-in, i.e., the driver plug-in corresponding to the VR headset in a preset driver plug-in set; storing the display image in a display buffer, where the display buffer is configured according to the display parameters; and taking the display image out of the display buffer and sending it to the VR headset through the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image.
In existing approaches, the mobile terminal in a split head-mounted display system has difficulty acquiring the inertial-unit data and display parameters, so compatibility between the mobile terminal and the VR headset is poor and the final picture output is likewise poor. With the present method, by presetting a driver plug-in set containing the driver plug-in corresponding to each VR headset, the mobile terminal can acquire the inertial-unit data and display parameters quickly and accurately; the compatibility between the mobile terminal and the VR headset is therefore better, improving the display effect of the final output picture.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from the structures shown here without creative effort.
Fig. 1 is a schematic structural diagram of a mobile terminal in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for displaying an output image according to the present invention;
FIG. 3 is a schematic structural diagram of a split VR head display system according to an embodiment of the present invention;
FIG. 4 is a block diagram of a first embodiment of a display apparatus for an output picture according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile terminal in a hardware operating environment according to an embodiment of the present invention.
The mobile terminal may be, for example, a mobile phone or a smartphone.
Generally, a mobile terminal includes: at least one processor 301, a memory 302 and a display program of an output screen stored on the memory and executable on the processor, the display program of the output screen being configured to implement the steps of the display method of the output screen as described before.
The processor 301 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 301 may be implemented in at least one of the hardware forms DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. The processor 301 may further include an AI (Artificial Intelligence) processor for handling operations related to the display method of the output picture, so that a model of the display method can be trained and learned autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement a method of displaying an output screen provided by method embodiments herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, it also has the ability to capture touch signals on or over its surface; the touch signal may be input to the processor 301 as a control signal for processing. In that case, the display screen 305 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, on the front panel of the electronic device; in other embodiments, there may be at least two display screens 305, disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the electronic device. The display screen 305 may even be arranged in a non-rectangular irregular figure, that is, a shaped screen. The display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The power supply 306 is used to power various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the mobile terminal and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium storing a display program of an output picture; when executed by a processor, the program implements the steps of the display method of an output picture described above, so a detailed description is omitted here, as are the beneficial effects, which are the same as those of the method. For technical details not disclosed in the embodiments of the computer-readable storage medium, refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to execute on one mobile terminal, or on multiple mobile terminals located at one site or distributed across multiple sites and interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The computer-readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, an embodiment of the display method of the output screen of the present invention is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a method for displaying an output screen according to the present invention, the method being applied to a mobile terminal, and the method including the following steps:
step S11: when VR data are obtained, rendering is carried out on the VR data by using inertial unit data and display parameters of a VR helmet, and a display image is obtained, wherein the inertial unit data and the display parameters are obtained from the VR helmet by using a selected driving plug-in which is a driving plug-in corresponding to the VR helmet in a preset driving plug-in set.
Step S12: and storing the display image in a display buffer area, wherein the display buffer area is configured through the display parameters.
Step S13: and taking out the display image from the display buffer area, and sending the display image to the VR helmet by using the selected driving plug-in so as to enable the VR helmet to output an output picture corresponding to the display image.
The method of the present invention is executed by a mobile terminal: the mobile terminal is installed with a display program of an output picture, and the steps of the display method are implemented when the mobile terminal executes that program. It can be understood that in the present invention the mobile terminal and the VR headset are connected for information interaction and together carry out the steps of the display method of the output picture.
The VR data is Virtual Reality data acquired by the mobile terminal for display by the VR headset. The inertial measurement unit of the VR headset comprises a gravity-acceleration subunit, a gyroscope subunit, a geomagnetic subunit, a temperature subunit, a timestamp subunit, and the like. The display parameters are parameters of the display endpoint of the VR headset, the display endpoint being the VR display screen; the display parameters include the resolution, refresh timestamp, synchronization information, refresh rate, periodic synchronization signal, and the like of the VR headset.
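The data listed above can be gathered into two small records. The field names follow the description; the concrete types are assumptions for illustration:

```python
# Sketch of the data the headset exposes: inertial-unit readings from the
# listed subunits, and the display-endpoint parameters. Types are assumed.

from dataclasses import dataclass

@dataclass
class InertialUnitData:
    acceleration: tuple   # gravity-acceleration subunit, m/s^2 per axis
    gyro: tuple           # gyroscope subunit, rad/s per axis
    magnetometer: tuple   # geomagnetic subunit
    temperature: float    # temperature subunit
    timestamp_ns: int     # timestamp subunit

@dataclass
class DisplayParams:
    resolution: tuple          # display-endpoint resolution
    refresh_rate_hz: float     # refresh rate
    refresh_timestamp_ns: int  # refresh timestamp
    vsync_period_ns: int       # periodic synchronization signal
```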
In addition, the driver plug-ins released for the VR headsets of different manufacturers can be integrated to obtain the preset driver plug-in set: obtain a number of different preset driver plug-ins (the plug-ins integrated for the products released by the various manufacturers), determine the manufacturer information and product-type information corresponding to each, establish a mapping between the (manufacturer information, product-type information) pairs and the preset driver plug-ins, and obtain the preset driver plug-in set from this mapping, the manufacturer information, the product-type information, and the plug-ins themselves.
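Building the preset driver plug-in set is then just registering each plug-in under its (manufacturer, product type) key. A sketch with invented entries:

```python
# Sketch of assembling the preset driver plug-in set from per-vendor
# releases. All entries are invented for illustration.

def build_plugin_set(entries):
    """Map each (manufacturer, product_type) pair to its driver plug-in."""
    plugin_set = {}
    for manufacturer, product_type, plugin in entries:
        plugin_set[(manufacturer, product_type)] = plugin
    return plugin_set
```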
The selected driver plug-in is then determined as follows: when access of the VR headset is detected, acquire the device information of the VR headset, and use the device information to determine, in the preset driver plug-in set, the selected driver plug-in corresponding to the VR headset.
In a specific application, the VR headset is connected to the mobile terminal through a USB port. The device information of the VR headset consists of the manufacturer information and the product-type information of the VR headset, and the corresponding selected driver plug-in is determined in the preset driver plug-in set from these two pieces of information.
The inertial-unit data and display parameters of the VR headset are obtained through the selected driver plug-in, and a display buffer is created in the mobile terminal from the display parameters (such as the resolution, refresh rate, and periodic synchronization signal of the VR headset); the display buffer comprises a main display buffer and an auxiliary display buffer.
Meanwhile, a cross-process communication (IPC) server, such as a Unix local socket, an anonymous pipe, or a Binder, can be created for receiving VR data.
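Any of the listed IPC mechanisms would work; a sketch using a connected local socket pair (standing in for the Unix local socket option, with the Binder and pipe variants omitted) shows the receive path:

```python
# Sketch of a local IPC channel for receiving VR data. A socketpair stands
# in for the Unix local socket the text mentions; real code would bind a
# named endpoint and loop over incoming frames.

import socket

def ipc_demo(payload: bytes) -> bytes:
    server, client = socket.socketpair()  # two connected local endpoints
    client.sendall(payload)               # app process submits VR data
    data = server.recv(1024)              # display service receives it
    server.close()
    client.close()
    return data
```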
Further, before the step of acquiring the device information of the VR headset when access of the VR headset is detected, the method further comprises: performing a device-description-type definition operation on the VR headset using a preset device access specification; performing definition operations for a control endpoint, a display endpoint, and an inertial measurement unit of the VR headset; and monitoring hot-plugging of the VR headset using the device description type of the VR headset. The step of acquiring device information of the VR headset when access of the VR headset is detected comprises: when access of the VR headset is detected, determining whether the VR headset matches a preset object filtering rule, where the object filtering rule is obtained based on the device description type of the VR headset; and when the VR headset matches the object filtering rule, acquiring the device information of the VR headset.
The preset device access specification refers to the USB-IF specification; it is used to define the device description type of the VR headset so that the mobile terminal can open the VR device according to that type. Meanwhile, the different endpoints, namely the display endpoint, the inertial measurement unit, and the control endpoint, must be defined; the control endpoint covers brightness control, volume control, version information, serial number, and the like of the VR headset.
The VR headset comprises a display processing module configured to receive the HDMI or DP display frame and the auxiliary display frame and perform the corresponding data processing, so as to display the processed frame on the screen, i.e., to carry out the output-picture process described below.
Monitoring hot plugging of the VR headset by using its device description type may mean: adding the USB device description type defined for the VR headset to the system hot-plug monitoring list so as to monitor plug and unplug events of the headset. The monitoring can be implemented by adding USB Host feature support in the server manifest; the pseudocode is as follows:
(The pseudocode appears only as an image, Figure BDA0003397695000000101, in the original publication and is not reproduced here.)
Accordingly, for the VR headset, the configured filtering conditions (the object filtering rules) are as follows:
(The object filtering rules appear only as an image, Figure BDA0003397695000000102, in the original publication and are not reproduced here.)
When the object filtering rules are satisfied, the VR headset matches them and is allowed to connect, so that the relevant data of the VR headset (the various data above) can be acquired smoothly.
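The matching step can be illustrated as follows: when a USB device attaches, its descriptor is checked against the preset filter rules before it is treated as the VR headset. The field names and example IDs below are hypothetical, since the patent's actual rules survive only as an image.

```python
# Illustrative sketch of object-filter matching for an attached USB device.
def matches_filter(descriptor: dict, rules: list) -> bool:
    """A device matches if, for some rule, every field named in that rule
    equals the corresponding descriptor field."""
    return any(all(descriptor.get(k) == v for k, v in rule.items())
               for rule in rules)

# Hypothetical rule set derived from the headset's device description type
VR_FILTER_RULES = [
    {"vendor_id": 0x1234, "product_id": 0x5678},             # a specific headset
    {"device_class": 0xEF, "interface_name": "vr-display"},  # a class-based rule
]
```

On Android this role is played by a `device_filter.xml` resource attached to the `ACTION_USB_DEVICE_ATTACHED` intent filter; the Python form is only for illustration.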
Specifically, the display image includes a main view layer image and an auxiliary view layer image. The step of rendering the VR data by using the inertial unit data and the display parameters of the VR headset to obtain the display image includes: determining view information by using the inertial unit data; obtaining the main view layer image by using the view information, the VR data, and the configuration parameters of the main display buffer; and generating the auxiliary view layer image by using the main view layer image and the configuration parameters of the auxiliary display buffer. The step of storing the display image in the display buffer includes: storing the main view layer image in the main display buffer and storing the auxiliary view layer image in the auxiliary display buffer. The step of taking the display image out of the display buffer and sending it to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image, includes: taking the main view layer image out of the main display buffer, taking the auxiliary view layer image out of the auxiliary display buffer, and sending both to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the main view layer image and the auxiliary view layer image.
The view information refers to the viewing angle of the user when the headset, as worn, is in a given pose. Typically, multiple layers are created: one layer is the main view layer (the main view layer image), and the others are auxiliary view layer images, which may include a background layer or an overlay layer.
The main view layer image is obtained by projecting the scene through a camera according to the view information. The background layer may be a 360-degree panorama or a low-resolution main view; the overlay layer may be a small rectangular area, a curved surface, or the like. The layers can be chosen according to the actual usage scene: for example, the background layer may be a low-resolution main view while the main view layer carries a higher-resolution image of the gaze-point region; or the background layer may be a panorama, with a fixed video or window on the main-view side.
In general, the main view layer image includes first display information of the main view layer image, and the auxiliary view layer image includes second display information of the auxiliary view layer image, first depth information of the main view layer image, and second depth information of the auxiliary view layer. The first display information may include the display orientation and display size of the main view layer image, and the second display information may include the display orientation and display size of the auxiliary view layer image.
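The per-layer metadata just described can be sketched as plain data structures: the main view layer image carries its own display information, while the auxiliary view layer image additionally carries the depth information for both layers. The concrete field types are assumptions for illustration.

```python
# Hedged sketch of the layer-image metadata described in the text.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DisplayInfo:
    orientation: Tuple[float, float, float]  # display orientation (e.g. Euler angles)
    size: Tuple[int, int]                    # display size (width, height)

@dataclass
class MainViewLayerImage:
    pixels: bytes
    first_display_info: DisplayInfo

@dataclass
class AuxViewLayerImage:
    pixels: bytes
    second_display_info: DisplayInfo
    first_depth: float   # depth information of the main view layer image
    second_depth: float  # depth information of the auxiliary view layer image
```

Packing the main layer's depth alongside the auxiliary layer, as the text specifies, lets the headset adjust both display postures from a single auxiliary-layer message.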
In a specific application, the interaction between the VR headset and the mobile terminal is realized by the corresponding selected driver plug-in: parameter acquisition, instruction transmission, and transmission of picture data (VR data and display data).
Further, the step of sending the main view layer image and the auxiliary view layer image to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the two images, includes: sending the main view layer image and the auxiliary view layer image to the VR headset by using the selected driver plug-in, so that the VR headset determines a first display posture of the main view layer image and a second display posture of the auxiliary view layer image by using the first display information and the second display information; adjusting the first display posture by using the first depth information and the inertial unit data to obtain an adjusted first display posture, and adjusting the second display posture by using the second depth information and the inertial unit data to obtain an adjusted second display posture; determining new first display information of the main view layer image and new second display information of the auxiliary view layer by using the adjusted first and second display postures; and obtaining and outputting an output picture based on the new first display information, the new second display information, the main view layer image, and the auxiliary view layer image.
After the VR headset receives the main view layer image and the auxiliary view layer image, its display processing module extracts the first display information and the second display information from them and uses these to calculate the initial display postures of the two images (postures computed by the mobile terminal that do not yet correspond to the headset pose at the user's current position; if the images were displayed directly, the result might be poor).
Then the display posture of the main view layer image is adjusted by using the current inertial unit data and the first depth information (the depth information of the main view layer image) to obtain the adjusted first display posture, and the display posture of the auxiliary view layer image is adjusted by using the same inertial unit data and the second depth information (the depth information of the auxiliary view layer image) to obtain the adjusted second display posture. The adjusted first and second display postures correspond to the current pose of the headset, so the display effect is better. Finally, the display position and size of each layer image are calculated from the adjusted postures, and the main view layer image and the auxiliary view layer image are composited to obtain the final output picture.
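The depth-dependent adjustment above can be sketched with a deliberately simplified model: the layer's initial screen position is corrected by the head rotation the IMU has measured since render time, with the shift scaled by the layer's depth (nearer layers shift more, as in ordinary parallax). The linearized formula and its scale constant are assumptions for illustration, not the patent's actual computation.

```python
# Hedged sketch: headset-side correction of a layer's display posture using
# IMU data and per-layer depth (a simplified parallax model).
def adjust_display_pose(initial_xy, yaw_delta_rad, depth_m,
                        screen_px_per_rad=1000.0):
    """Return the adjusted screen position of a layer.

    initial_xy     -- (x, y) position computed by the mobile terminal
    yaw_delta_rad  -- head yaw measured by the IMU since render time
    depth_m        -- layer depth; the parallax shift falls off with distance
    """
    parallax = screen_px_per_rad * yaw_delta_rad / max(depth_m, 0.1)
    return (initial_xy[0] - parallax, initial_xy[1])
```

A background panorama at large depth is barely shifted by this correction, while a near overlay layer moves noticeably, which is why the text sends depth information per layer.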
Further, after the step of sending the display image to the VR headset by using the selected driver plug-in, the method further comprises: receiving a display setting operation sent for the output picture; obtaining a setting instruction based on the display setting operation; and sending the setting instruction to the VR headset by using the selected driver plug-in, so that the VR headset configures the control endpoint with the setting instruction.
When viewing the output picture with the VR headset, the user can set the sound, brightness, and other properties of the output picture, that is, issue a display setting operation. The mobile terminal derives a setting instruction from this operation; the instruction configures the control endpoint of the VR headset according to the user's requirements, so that the sound, brightness, and so on of the VR headset are adjusted to the desired state.
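A small sketch of how a display setting operation might be turned into a setting instruction for the control endpoint. The field identifiers and two-byte encoding are invented for illustration; the patent does not specify a wire format.

```python
# Hedged sketch: encoding a user's setting operation as a control-endpoint write.
CONTROL_FIELDS = {"brightness": 0x01, "volume": 0x02}  # hypothetical field IDs

def make_setting_instruction(field: str, value: int) -> bytes:
    """Encode a control-endpoint write as [field-id, value]."""
    if field not in CONTROL_FIELDS or not 0 <= value <= 255:
        raise ValueError("unsupported setting")
    return bytes([CONTROL_FIELDS[field], value])
```

The mobile terminal would hand such an instruction to the selected driver plug-in, which writes it to the headset's control endpoint.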
Further, after the step of sending the display image to the VR headset by using the selected driver plug-in, the method further comprises: creating a virtual display screen when a virtual trigger operation is monitored; loading an application program when a selection operation for the application program is monitored; mapping the display interface of the loaded application program onto the virtual display screen; updating the VR data with the display interface mapped onto the virtual display screen; and returning to the step of rendering the VR data by using the inertial unit data and the display parameters of the VR headset to obtain a display image.
In this embodiment, the VR headset can also display native applications (non-VR data) of the mobile terminal. When a virtual trigger operation is monitored, it indicates that the user wants to display non-VR data. A virtual application (an application that performs VR display of non-VR data, i.e., a display program for that application's pictures) is started, and process resources such as a process context, a display buffer, and permissions are applied for. The virtual application then creates a virtual display screen; when a selection operation for a specific phone application is monitored, that application is loaded and its display interface is mapped onto the virtual display screen. The VR data is then updated with the display interface mapped onto the virtual display screen. Finally, the VR data corresponding to that display interface is processed according to the method above, so that the VR headset outputs the corresponding output picture.
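The virtual-display path above can be summarized as a small flow sketch: a native application's interface is mirrored onto a virtual screen, and that screen's contents become the new VR data source, after which the normal render-store-send pipeline runs again. The class and method names are hypothetical stand-ins for the platform facilities (e.g. an Android `VirtualDisplay`) the text implies.

```python
# Hedged flow sketch of the virtual-display path for non-VR applications.
class VirtualDisplayScreen:
    def __init__(self):
        self.surface = None

    def map_app_interface(self, app_frame: bytes):
        """Mirror the loaded application's display interface onto this screen."""
        self.surface = app_frame

def update_vr_data(vr_state: dict, screen: VirtualDisplayScreen) -> dict:
    """Replace the VR data with the virtual screen's current contents;
    the render-store-send steps are then executed again on this data."""
    vr_state["vr_data"] = screen.surface
    return vr_state
```

This is only the data flow; on a real device the mapping is a hardware-composited surface, not a copied byte string.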
The technical scheme of the invention provides a display method of an output picture for a mobile terminal, the method comprising: when VR data is obtained, rendering the VR data by using inertial unit data and display parameters of a VR headset to obtain a display image, where the inertial unit data and the display parameters are obtained from the VR headset by using a selected driver plug-in, the selected driver plug-in being the driver plug-in corresponding to the VR headset in a preset driver plug-in set; storing the display image in a display buffer, where the display buffer is configured according to the display parameters; and taking the display image out of the display buffer and sending it to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image.
In existing methods, the mobile terminal in a split head-mounted display system has difficulty acquiring the inertial unit data and display parameters, so compatibility between the mobile terminal and the VR headset is poor and the final picture output suffers. With this method, by presetting a driver plug-in set containing the selected driver plug-in corresponding to the VR headset, the mobile terminal can acquire the inertial unit data and display parameters quickly and accurately, improving compatibility between the mobile terminal and the VR headset and the final display effect of the output picture.
The scheme handles display-related issues such as extended display and vertical synchronization (VSYNC) on the service side, avoiding modification of the phone's operating system and thus making the scheme more universal.
The scheme can transmit multiple layers to the headset via extended display; the functions of the different layers can be chosen according to the actual scene, and the layers are finally composited at the display end, which reduces the overall display latency and improves the display effect.
The scheme can load the phone's native applications into a virtual container through the virtual application and then display them to the VR device through that container, achieving compatibility with mobile-phone applications.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a split VR head-mounted display system according to an embodiment of the present invention. The system includes a VR headset and a mobile terminal, and the mobile terminal includes: a memory, a processor, and a display program of an output picture that is stored in the memory and runs on the processor, where the display program, when executed by the processor, implements the steps of the display method of an output picture described above.
Specifically, the VR headset is connected to the mobile terminal through a USB serial port.
Referring to fig. 4, fig. 4 is a block diagram of a first embodiment of a display apparatus for an output picture according to the present invention. The apparatus is used for a mobile terminal and includes:
the rendering module 10, configured to, when VR data is obtained, render the VR data by using inertial unit data and display parameters of a VR headset to obtain a display image, where the inertial unit data and the display parameters are obtained from the VR headset by using a selected driver plug-in, the selected driver plug-in being the driver plug-in corresponding to the VR headset in a preset driver plug-in set;
the storage module 20, configured to store the display image in a display buffer, where the display buffer is configured according to the display parameters;
the sending module 30, configured to take the display image out of the display buffer and send it to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image.
It should be noted that, since the steps executed by the apparatus of this embodiment are the same as those of the foregoing method embodiment, reference may be made to the foregoing embodiment for the specific implementation and achievable technical effects, which are not repeated here.
The above description is only an alternative embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A display method of an output picture, for a mobile terminal, the method comprising:
when VR data is obtained, rendering the VR data by using inertial unit data and display parameters of a VR headset to obtain a display image, wherein the inertial unit data and the display parameters are obtained from the VR headset by using a selected driver plug-in, the selected driver plug-in being a driver plug-in corresponding to the VR headset in a preset driver plug-in set;
storing the display image in a display buffer, wherein the display buffer is configured according to the display parameters; and
taking the display image out of the display buffer and sending it to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image.
2. The method of claim 1, wherein before the step of rendering the VR data by using inertial unit data and display parameters of a VR headset to obtain a display image when the VR data is obtained, the method further comprises:
obtaining device information of the VR headset when access of the VR headset is monitored; and
determining, by using the device information, the selected driver plug-in corresponding to the VR headset in the preset driver plug-in set.
3. The method of claim 2, wherein before the step of obtaining device information of the VR headset when access of the VR headset is monitored, the method further comprises:
performing a device-description-type definition operation on the VR headset by using a preset device access specification;
performing a definition operation of a control endpoint, a display endpoint, and an inertial measurement unit on the VR headset; and
monitoring hot plugging of the VR headset by using the device description type of the VR headset;
wherein the step of obtaining device information of the VR headset when access of the VR headset is monitored comprises:
when access of the VR headset is monitored, determining whether the VR headset matches a preset object filtering rule, wherein the object filtering rule is obtained based on the device description type of the VR headset; and
when the VR headset matches the object filtering rule, obtaining the device information of the VR headset.
4. The method of claim 3, wherein the display buffer comprises a main display buffer and an auxiliary display buffer, and the display image comprises a main view layer image and an auxiliary view layer image; the step of rendering the VR data by using the inertial unit data and the display parameters of the VR headset to obtain a display image comprises:
determining view information by using the inertial unit data;
obtaining the main view layer image by using the view information, the VR data, and configuration parameters of the main display buffer; and
generating the auxiliary view layer image by using the main view layer image and configuration parameters of the auxiliary display buffer;
the step of storing the display image in a display buffer comprises:
storing the main view layer image in the main display buffer and storing the auxiliary view layer image in the auxiliary display buffer; and
the step of taking the display image out of the display buffer and sending it to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the display image, comprises:
taking the main view layer image out of the main display buffer, taking the auxiliary view layer image out of the auxiliary display buffer, and sending the main view layer image and the auxiliary view layer image to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the main view layer image and the auxiliary view layer image.
5. The method of claim 4, wherein the main view layer image comprises first display information of the main view layer image, and the auxiliary view layer image comprises second display information of the auxiliary view layer image, first depth information of the main view layer image, and second depth information of the auxiliary view layer image; the step of sending the main view layer image and the auxiliary view layer image to the VR headset by using the selected driver plug-in, so that the VR headset outputs an output picture corresponding to the main view layer image and the auxiliary view layer image, comprises:
sending the main view layer image and the auxiliary view layer image to the VR headset by using the selected driver plug-in, so that the VR headset determines a first display posture of the main view layer image and a second display posture of the auxiliary view layer image by using the first display information and the second display information; adjusts the first display posture by using the first depth information and the inertial unit data to obtain an adjusted first display posture, and adjusts the second display posture by using the second depth information and the inertial unit data to obtain an adjusted second display posture; determines new first display information of the main view layer image and new second display information of the auxiliary view layer image by using the adjusted first display posture and the adjusted second display posture; and obtains and outputs an output picture based on the new first display information, the new second display information, the main view layer image, and the auxiliary view layer image.
6. The method of claim 1, wherein after the step of sending the display image to the VR headset by using the selected driver plug-in, the method further comprises:
receiving a display setting operation sent for the output picture;
obtaining a setting instruction based on the display setting operation; and
sending the setting instruction to the VR headset by using the selected driver plug-in, so that the VR headset configures a control endpoint with the setting instruction.
7. The method of claim 1, wherein after the step of sending the display image to the VR headset by using the selected driver plug-in, the method further comprises:
creating a virtual display screen when a virtual trigger operation is monitored;
loading an application program when a selection operation for the application program is monitored;
mapping a display interface of the loaded application program onto the virtual display screen;
updating the VR data with the display interface mapped onto the virtual display screen; and
returning to the step of rendering the VR data by using the inertial unit data and the display parameters of the VR headset to obtain a display image.
8. A split VR head-mounted display system, the system comprising a VR headset and a mobile terminal, the mobile terminal comprising: a memory, a processor, and a display program of an output picture stored in the memory and executable on the processor, wherein the display program, when executed by the processor, implements the steps of the display method of an output picture according to any one of claims 1 to 7.
9. A mobile terminal, wherein the mobile terminal comprises: a memory, a processor, and a display program of an output picture stored in the memory and executable on the processor, wherein the display program, when executed by the processor, implements the steps of the display method of an output picture according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a display program of an output picture is stored on the computer-readable storage medium, and the display program, when executed by a processor, implements the steps of the display method of an output picture according to any one of claims 1 to 7.
CN202111496648.9A 2021-12-07 2021-12-07 Display method and system of output picture, mobile terminal and storage medium Active CN114168096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111496648.9A CN114168096B (en) 2021-12-07 2021-12-07 Display method and system of output picture, mobile terminal and storage medium


Publications (2)

Publication Number Publication Date
CN114168096A true CN114168096A (en) 2022-03-11
CN114168096B CN114168096B (en) 2023-07-25

Family

ID=80484717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111496648.9A Active CN114168096B (en) 2021-12-07 2021-12-07 Display method and system of output picture, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114168096B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106998409A (en) * 2017-03-21 2017-08-01 华为技术有限公司 A kind of image processing method, head-mounted display and rendering apparatus
CN110488977A (en) * 2019-08-21 2019-11-22 京东方科技集团股份有限公司 Virtual reality display methods, device, system and storage medium
CN111708431A (en) * 2020-05-12 2020-09-25 青岛小鸟看看科技有限公司 Human-computer interaction method and device, head-mounted display equipment and storage medium
CN112039899A (en) * 2020-09-01 2020-12-04 深圳创维数字技术有限公司 Virtual reality system control method, system and storage medium
CN112799508A (en) * 2021-01-18 2021-05-14 Oppo广东移动通信有限公司 Display method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN114168096B (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant