CN113467602B - VR display method and system - Google Patents


Info

Publication number: CN113467602B
Application number: CN202010245436.2A
Authority: CN (China)
Prior art keywords: pixel point, display, content, display content, original
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113467602A
Inventors: 朱海琦, 徐林忠
Current Assignee: China Mobile Communications Group Co Ltd; China Mobile Group Zhejiang Co Ltd
Original Assignee: China Mobile Communications Group Co Ltd; China Mobile Group Zhejiang Co Ltd

Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Zhejiang Co Ltd
Priority to CN202010245436.2A
Publication of CN113467602A
Application granted
Publication of CN113467602B


Classifications

    • G06F 3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/1423: Digital output to display device; controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/147: Digital output to display device; using display panels

Abstract

The embodiment of the invention relates to the technical field of terminals and discloses a VR display method and a VR display system. The method comprises the following steps: a second device acquires first display content on a first screen of a first device, wherein the first display content comprises parameter information of a plurality of pixel points; and the second device determines second display content according to the parameter information of the pixel points, and displays the second display content on a second screen of the second device. In this way, the embodiment of the invention enables people other than the wearer of the VR glasses to view the display content of the VR glasses.

Description

VR display method and system
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to a VR display method and a VR display system.
Background
Virtual reality (VR) glasses are a type of glasses that fully immerse the user in a virtual environment by using a glasses-mounted display to shut out external sights and sounds, where the display content in the virtual environment changes with the user's viewing angle. With the continued development of electronic technology, virtual reality glasses now allow users to operate controls in the virtual picture through other peripheral devices (e.g., handles) according to their viewpoint and position in the virtual environment.
Currently, the display content of a VR device is generally displayed only to the wearer of the VR device, and people other than the wearer cannot view the display content of the VR device.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a VR display method and system, which enable people other than the wearer of a VR device to view the display content of the VR device.
According to a first aspect of an embodiment of the present invention, there is provided a VR display method, including: the method comprises the steps that a second device obtains first display content on a first screen of a first device, wherein the first display content comprises parameter information of a plurality of pixel points; and the second device determines second display content according to the parameter information of the pixel point, and displays the second display content on a second screen of the second device.
In an optional manner, the acquiring the first display content on the first screen of the first device specifically includes: acquiring original content corresponding to the first display content; determining a first starting pixel point and a first maximum pixel point in the first screen; acquiring, in the original content, a first original pixel point corresponding to the first starting pixel point and a second original pixel point corresponding to the first maximum pixel point; determining a display area in the original content according to the first original pixel point and the second original pixel point; and acquiring parameter information of each pixel point in the display area, thereby acquiring the first display content.
In an optional manner, the parameter information of the pixel point includes a pixel identifier, a skew angle and a color parameter; the acquiring the parameter information of each pixel point in the display area specifically includes: determining the pixel identifier of the pixel point; determining the skew angle of the pixel point according to eyeball focus information of the first device; and determining the color parameters of the pixel point according to the original content.
In an optional manner, the determining the second display content according to the parameter information of the pixel point specifically includes: determining a second starting pixel point and a second maximum pixel point in the second screen; determining intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point and the second maximum pixel point; determining a rotation angle according to the skew angle of the pixel point; and rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
In an optional manner, the determining the rotation angle according to the skew angle of the pixel point specifically includes: calculating the rotation angle according to the following formula:

α = θ*[(u2 - u1)(v2 - v1)] / [(x2 - x1)(y2 - y1)]

where (x1, y1) are the coordinates of the first original pixel point, (x2, y2) are the coordinates of the second original pixel point, (u1, v1) are the coordinates of the second starting pixel point, (u2, v2) are the coordinates of the second maximum pixel point, and θ is the skew angle.
In an alternative, the method further comprises: the second device receives a transformation control instruction for an object in the second display content; and the second equipment performs transformation processing on the objects in the second display content according to the transformation control instruction so as to update the second display content.
In an alternative, the method further comprises: the second device determines parameter information of the updated pixel point according to the updated second display content; and the second device sends the parameter information of the updated pixel point to the first device so that the first device updates the first display content according to the parameter information of the updated pixel point.
In an alternative manner, the transformation control instruction includes one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction, and a color change control instruction.
According to a second aspect of an embodiment of the present invention, there is provided a VR display system including: a first device and a second device, the first device being connected to the second device; the second device is configured to: acquiring first display content on a first screen of the first device, wherein the first display content comprises parameter information of a plurality of pixel points; and determining second display content according to the parameter information of the pixel point, and displaying the second display content on a second screen of the second device.
According to a third aspect of an embodiment of the present invention, there is provided a VR display apparatus including: an acquisition module, configured to acquire first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixel points; and a display module, configured to determine second display content according to the parameter information of the pixel points and display the second display content on a second screen of a second device.
According to a fourth aspect of embodiments of the present invention, there is provided a computing device comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus; the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to execute the VR display method described above.
According to a fifth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein at least one executable instruction for causing a computing device to perform the VR display method described above.
According to the embodiment of the invention, the second device acquires the first display content on the first screen of the first device, wherein the first display content comprises parameter information of a plurality of pixel points, determines the second display content according to the parameter information of the pixel points, and displays the second display content on the second screen of the second device. In this way, a user of the second device can view the content viewed by the user of the first device, so that people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as the display content of the first device.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments of the present invention may be more clearly understood and implemented according to the content of the specification, specific embodiments of the present invention are described below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 shows a schematic structural diagram of a VR display system according to an embodiment of the present invention;
fig. 2 is a flow chart illustrating a VR display method according to an embodiment of the present invention;
fig. 3 is a flow chart illustrating a VR display method according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a VR display device according to an embodiment of the present invention;
FIG. 5 illustrates a schematic diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
Currently, the display content of a VR device is generally displayed only to the wearer of the VR device; people other than the wearer cannot view the display content of the VR device and therefore cannot effectively communicate with the wearer about the content being viewed.
Based on the above, the embodiment of the invention provides a VR display method and a VR display system, which enable people other than the wearer of a VR device to view the display content of the VR device.
In particular, embodiments of the present invention are further described below with reference to the accompanying drawings.
It should be understood, however, that the following examples provided herein may be combined with one another to form new embodiments, so long as they do not conflict.
Fig. 1 shows a schematic structural diagram of a VR display system according to an embodiment of the present invention. As shown in fig. 1, the VR display system 100 includes: a first device 110 and a second device 120.
Wherein the first device 110 and the second device 120 are capable of communication, for example, through a connection wire or through interconnection based on a communication network. The communication network may be a local area network or a wide area network switched through relay devices. When the communication network is a local area network, the communication network may be, for example, a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, the communication network may be, for example, a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future evolved public land mobile network (PLMN), or the internet.
The first device 110 is a virtual reality display device, such as a head-mounted virtual reality device (head-mounted display), which is also called a virtual reality helmet, VR glasses, or a glasses-type display.
Specifically, the first device 110 includes a first screen 111. The first screen 111 includes a first display screen for displaying images, videos, etc., so that the user has an immersive experience when wearing the first device 110 and viewing content on the first display screen through the first screen 111. The first display screen may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the first screen 111 may include one or more first display screens.
The second device 120 may be a virtual reality display device, or may be another electronic device. For example, the other electronic device may be a personal computer (PC), a personal digital assistant (PDA), a tablet computer, a notebook computer, an in-vehicle computer, a handheld game console, smart glasses, a smart watch, a drone, or the like.
Specifically, the second device 120 includes a second screen 121. The second screen 121 includes a second display screen for displaying images, videos, and the like. The second display screen may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the second screen 121 may include one or more second display screens.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the first device 110 and the second device 120. In other embodiments of the present invention, the first device 110 or the second device 120 may include more or fewer components than shown, may combine certain components, may split certain components, or may have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of hardware and software.
In the present embodiment, the first device 110 is configured to determine first display content on the first screen 111 of the first device 110; the first device 110 is further configured to send the first display content to the second device 120; the second device 120 is configured to acquire the first display content on the first screen of the first device 110, where the first display content includes parameter information of a plurality of pixel points; and the second device 120 is further configured to determine second display content according to the parameter information of the pixel points and display the second display content on the second screen 121 of the second device 120. In this way, the embodiment of the invention enables people other than the wearer of the VR device to view the display content of the VR device.
It should be noted that, the first device 110 may be connected to one or more second devices 120 at the same time, and both the first device 110 and each second device 120 may implement the VR display method provided by the embodiments of the present invention.
Fig. 2 is a flow chart illustrating a VR display method according to an embodiment of the present invention. The method may be applied to VR display system 100 in fig. 1. As shown in fig. 2, the method includes:
Step 210, the second device obtains first display content on a first screen of the first device, where the first display content includes parameter information of a plurality of pixel points.
Wherein the first display content on the first screen refers to the entire content currently displayed on the first screen. When the first screen comprises a plurality of first display screens, the first display content is the content on all the first display screens. The first device may send the first display content to the second device after the first device establishes a connection with the second device.
Of course, the second device needs to be connected to the first device before the second device obtains the first display content on the first screen of the first device. The first device may store a connectable device identifier list in advance, so that the first device may determine whether to accept connection of the second device by looking up a device identifier in the device identifier list.
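The allowlist check described above can be sketched minimally as follows; the function name and the identifier format are illustrative assumptions, not details from the patent:

```python
def accept_connection(device_id, allowed_ids):
    """Accept the second device only if its identifier appears in the
    first device's pre-stored list of connectable device identifiers."""
    return device_id in set(allowed_ids)

# Hypothetical identifier list; the patent does not specify a format.
allowlist = ["second-device-01", "second-device-02"]
print(accept_connection("second-device-01", allowlist))  # True
print(accept_connection("unknown-device", allowlist))    # False
```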
Step 210 specifically includes:
Step 211, obtaining original content corresponding to the first display content.
The original content corresponding to the first display content refers to a content resource from which the first device can obtain the first display content after processing, that is, the original content is unprocessed. The method further comprises: the first device acquires the original content, acquires the user gesture, processes the original content according to the user gesture to obtain first display content, and sends the first display content to the second device. The original content may be sent to the first device by another device (such as the second device), and may also be generated by the first device according to an image captured by the camera of the first device.
Step 212, determining a first starting pixel point and a first maximum pixel point in the first screen.
The first starting pixel point and the first maximum pixel point refer to two corner points of the first screen. For example, when the first screen includes two rectangular first display screens which respectively display the left-eye and right-eye images, the first starting pixel point may be the upper-left corner point of the left first display screen, and the first maximum pixel point may be the lower-right corner point of the left first display screen or the lower-right corner point of the right first display screen. For another example, when the first screen includes two circular first display screens, the first starting pixel point and the first maximum pixel point may be the two endpoints of a diameter of one of the first display screens. In this step, the determining the first starting pixel point and the first maximum pixel point in the first screen may specifically be: acquiring a first screen coordinate system, and determining the coordinates of the first starting pixel point and the first maximum pixel point in the first screen coordinate system. For example, in the first screen coordinate system, the coordinates of the first starting pixel point and the first maximum pixel point are determined to be (m1, n1) and (m2, n2). For another example, in the first screen coordinate system, the coordinates of the first starting pixel point and the first maximum pixel point are determined to be (0, 0) and (m2, n2).
Step 213, in the original content, a first original pixel corresponding to the first initial pixel is obtained, and a second original pixel corresponding to the first maximum pixel is obtained.
The first display content displayed on the first screen is converted from part or all of the original content, so the pixel points corresponding to the first display content can be found in the original content. Thus, in this step, the acquiring, in the original content, a first original pixel point corresponding to the first starting pixel point and a second original pixel point corresponding to the first maximum pixel point may specifically be: in the original content coordinate system, determining the coordinates of the first original pixel point and the second original pixel point according to the coordinates of the first starting pixel point and the first maximum pixel point in the first screen coordinate system. For example, in the original content coordinate system, the coordinates of the first original pixel point and the second original pixel point are determined to be (x1, y1) and (x2, y2).
Step 214, determining a display area in the original content according to the first original pixel point and the second original pixel point.
Wherein the shape of the first screen can be determined according to the first starting pixel point and the first maximum pixel point; then, in this step, a display area similar in shape to the first screen can be determined in the original content according to the coordinates of the first original pixel point and the second original pixel point.
Step 215, acquiring parameter information of each pixel point in the display area, thereby acquiring the first display content.
The parameter information of the pixel point includes a pixel identifier, a skew angle and a color parameter. The pixel identifier may be the coordinates of the pixel point in the original content coordinate system; the skew angle may be the offset angle between the pixel point and the center of the original content field of view; the color parameter may be the RGB value of the pixel point. Step 215 may specifically include:

step 2151, determining the pixel identifier of the pixel point;

step 2152, determining the skew angle of the pixel point according to the eyeball focus information of the first device;

step 2153, determining the color parameters of the pixel point according to the original content.

The pixel identifier (x, y) of the pixel point can be determined according to the coordinates of the pixel point in the original content coordinate system. The eyeball focus information of the first device can be obtained from sensor information arranged on the first device; according to the eyeball focus information, the offset angle between the pixel point and the center of the original content field of view, i.e., the skew angle of the pixel point, can be determined. For example, if the center coordinate of the original content field of view is (0, 0) and the pixel point is (x, y), the skew angle of the pixel point is θ = arctan(y/x). The color parameters (R, G, B) of the pixel point are determined according to the RGB values of the pixel point in the original content.
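The three sub-steps above can be sketched as a single helper; this is a minimal illustration that assumes the field-of-view center is the origin, uses `math.atan2` as a numerically safe form of arctan(y/x), and invents the dictionary layout for the parameter information:

```python
import math

def pixel_parameters(x, y, rgb):
    """Build the parameter information of one pixel point: the pixel
    identifier (its coordinates in the original content coordinate system),
    the skew angle relative to the field-of-view center (assumed here to be
    the origin (0, 0)), and the RGB color parameters."""
    theta = math.atan2(y, x)  # arctan(y / x), well-defined even when x == 0
    return {"id": (x, y), "skew": theta, "color": rgb}
```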
Step 220, the second device determines second display content according to the parameter information of the pixel points, and displays the second display content on a second screen of the second device.
Wherein the second display content refers to content that needs to be displayed on the second screen of the second device. When the second screen includes a plurality of second display screens, the contents displayed on the respective second display screens may be identical.
Step 220 may specifically include:
Step 221, determining a second starting pixel point and a second maximum pixel point in the second screen.
The second starting pixel point and the second maximum pixel point refer to two corner points of the second screen. Step 221 may specifically be: determining a second screen coordinate system, and determining the coordinates of the second starting pixel point and the second maximum pixel point in that coordinate system.
Step 222, determining the intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point and the second maximum pixel point.
The intermediate content refers to the data in the process of converting the first display content into the second display content. This step may specifically be: determining the proportional relation between the first screen and the second screen according to the coordinates of the first original pixel point, the second original pixel point, the second starting pixel point and the second maximum pixel point, and determining the pixel identifier of each pixel point of the intermediate content according to the pixel identifier of the pixel point of the first display content and this proportional relation; and determining the color parameters of the pixel points of the intermediate content according to the correspondence between the pixel points of the first display content and the pixel points of the intermediate content. For example, assume that the coordinates of the first original pixel point, the second original pixel point, the second starting pixel point and the second maximum pixel point are (x1, y1), (x2, y2), (u1, v1) and (u2, v2). When the pixel identifier of a pixel point of the first display content is (x, y) and the pixel identifier of the corresponding pixel point of the intermediate content is (u, v), then u = (u2 - u1)*x/(x2 - x1) and v = (v2 - v1)*y/(y2 - y1).
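This proportional mapping can be sketched as follows; the function and argument names are illustrative, and only the u and v formulas come from the text:

```python
def map_to_intermediate(x, y, first_orig, second_orig, second_start, second_max):
    """Map the pixel identifier (x, y) of a pixel point of the first display
    content to the pixel identifier (u, v) of the corresponding pixel point
    of the intermediate content, using the proportional relation between
    the two screens."""
    x1, y1 = first_orig    # first original pixel point
    x2, y2 = second_orig   # second original pixel point
    u1, v1 = second_start  # second starting pixel point
    u2, v2 = second_max    # second maximum pixel point
    u = (u2 - u1) * x / (x2 - x1)
    v = (v2 - v1) * y / (y2 - y1)
    return u, v
```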
Step 223, determining the rotation angle according to the skew angle of the pixel point.
The rotation angle is calculated according to the following formula:

α = θ*[(u2 - u1)(v2 - v1)] / [(x2 - x1)(y2 - y1)]

where (x1, y1) are the coordinates of the first original pixel point, (x2, y2) are the coordinates of the second original pixel point, (u1, v1) are the coordinates of the second starting pixel point, (u2, v2) are the coordinates of the second maximum pixel point, and θ is the skew angle.
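The formula above can be sketched directly (function and argument names are assumptions for illustration); it scales the skew angle θ by the ratio of the second screen's area to the display area in the original content:

```python
def rotation_angle(theta, first_orig, second_orig, second_start, second_max):
    """Compute alpha = theta * [(u2-u1)(v2-v1)] / [(x2-x1)(y2-y1)]."""
    x1, y1 = first_orig    # first original pixel point
    x2, y2 = second_orig   # second original pixel point
    u1, v1 = second_start  # second starting pixel point
    u2, v2 = second_max    # second maximum pixel point
    return theta * ((u2 - u1) * (v2 - v1)) / ((x2 - x1) * (y2 - y1))
```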
Step 224, rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
After the intermediate content is generated according to the pixel identifier and color parameters of each of its pixel points, the intermediate content is rotated according to the rotation angle, and the rotated intermediate content is determined as the second display content.
In some other embodiments, the rotation angle of each pixel point in the intermediate content may also be determined, and each pixel point in the intermediate content is displayed according to the corresponding rotation angle, so as to generate the second display content.
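The per-pixel rotation used in either variant could look like the following sketch, which rotates pixel identifiers about a center point; the patent does not specify the rotation center or the pixel representation, so both are assumptions here:

```python
import math

def rotate_pixels(pixels, alpha, center=(0.0, 0.0)):
    """Rotate the pixel identifier of every pixel point of the intermediate
    content by the rotation angle alpha about `center` (assumed origin)."""
    cx, cy = center
    rotated = []
    for p in pixels:
        x, y = p["id"]
        dx, dy = x - cx, y - cy
        rotated.append({
            **p,  # keep color and any other parameters unchanged
            "id": (cx + dx * math.cos(alpha) - dy * math.sin(alpha),
                   cy + dx * math.sin(alpha) + dy * math.cos(alpha)),
        })
    return rotated
```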
According to the embodiment of the invention, the second device acquires the first display content on the first screen of the first device, wherein the first display content comprises parameter information of a plurality of pixel points, determines the second display content according to the parameter information of the pixel points, and displays the second display content on the second screen of the second device. In this way, a user of the second device can view the content viewed by the user of the first device, so that people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as the display content of the first device.
Fig. 3 is a flowchart illustrating a VR display method according to another embodiment of the present invention. The method may be applied to VR display system 100 in fig. 1. Unlike the above embodiment, as shown in fig. 3, the method further includes:
Step 310, the second device receives a transformation control instruction for an object in the second display content.
Wherein the user may trigger, on the second device, a transformation control instruction for an object in the second display content. The transformation control instruction may include one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction and a color change control instruction, so as to move, rotate, enlarge, reduce, or change the color of the object in the second display content.
And 320, the second device performs transformation processing on the object in the second display content according to the transformation control instruction so as to update the second display content.
When the transformation control instruction is received, the second device performs the corresponding transformation processing on the object in the second display content according to the transformation control instruction, so as to update the second display content. For example, assuming that the transformation control instruction is a movement control instruction, the coordinates of each pixel point of the object are transformed to move the object; after the object is moved, the rotation angle of the object may be adaptively changed according to the current position of the object to update the second display content.
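A movement control instruction of this kind reduces to translating the coordinates of the object's pixel points. A minimal sketch (the function name and `((x, y), color)` representation are illustrative assumptions, not from the patent):

```python
def move_object(object_pixels, dx, dy):
    """Apply a movement control instruction: translate every pixel point of
    the selected object by (dx, dy). Pixel identifiers and color parameters
    are unchanged; only coordinates are transformed."""
    return [((x + dx, y + dy), color) for (x, y), color in object_pixels]
```

The updated coordinates would then feed into the updated parameter information that the second device sends back to the first device in step 340.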
And 330, the second device determines parameter information of the updated pixel point according to the updated second display content.
The parameter information of the updated pixel point may include an updated pixel identifier, an updated bias angle, and an updated color parameter. The second device may obtain parameter information of each updated pixel according to the updated second display content.
Step 340, the second device sends the updated parameter information of the pixel point to the first device, so that the first device updates the first display content according to the updated parameter information of the pixel point.
The first device updates the first display content according to the updated parameter information, which is the reverse process of step 220, and will not be described herein.
According to the embodiment of the invention, the second device receives a transformation control instruction for an object in the second display content, performs transformation processing on the object according to the instruction so as to update the second display content, determines the parameter information of the updated pixel points from the updated second display content, and sends that parameter information to the first device so that the first device updates the first display content accordingly. In this way, a user of the second device can watch the content watched by the user of the first device, people other than the wearer of the VR device can watch the display content of the VR device, the display content of the second device keeps the same viewing angle as that of the first device, and the user of the second device can control objects in the content watched by the user of the first device, so that people other than the wearer of the VR device can interact with the wearer.
Fig. 4 shows a schematic structural diagram of a VR display device according to an embodiment of the present invention. The apparatus may be applied to a second device. As shown in fig. 4, the apparatus 400 includes: an acquisition module 410 and a display module 420.
The obtaining module 410 is configured to obtain a first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixels; the display module 420 is configured to determine a second display content according to the parameter information of the pixel point, and display the second display content on a second screen of the second device.
In an alternative manner, the obtaining module 410 is specifically configured to: acquiring original content corresponding to the first display content; determining a first starting pixel point and a first maximum pixel point in the first screen; acquiring a first original pixel point corresponding to the first initial pixel point in the original content, and acquiring a second original pixel point corresponding to the first maximum pixel point; determining a display area in the original content according to the first original pixel point and the second original pixel point; and acquiring parameter information of each pixel point in the display area, thereby acquiring the first display content.
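The display-area step above amounts to cropping the original content between the two corresponding original pixel points. A minimal sketch, assuming the original content is a 2-D grid of color values indexed `[y][x]` (an assumption — the patent does not fix a concrete representation):

```python
def extract_display_region(original, p1, p2):
    """Crop the display area of `original` between the first original pixel
    point p1 = (x1, y1) and the second original pixel point p2 = (x2, y2),
    both inclusive. Returns the sub-grid of color values."""
    (x1, y1), (x2, y2) = p1, p2
    return [row[x1:x2 + 1] for row in original[y1:y2 + 1]]
```

The parameter information of each pixel point in the returned region (identifier, bias angle, color) would then constitute the first display content.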
In an optional manner, the parameter information of the pixel point includes a pixel identifier, a bias angle and a color parameter; the acquisition module 410 is specifically further configured to: determine the pixel identifier of the pixel point; determine the bias angle of the pixel point according to eyeball focus information of the first device; and determine the color parameter of the pixel point according to the original content.
In an alternative manner, the display module 420 is specifically configured to: determine a second starting pixel point and a second maximum pixel point in the second screen; determine intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second starting pixel point and the second maximum pixel point; determine a rotation angle according to the bias angle of the pixel point; and rotate the intermediate content according to the rotation angle, and determine the rotated intermediate content as the second display content.
In an alternative manner, the display module 420 is specifically configured to: the rotation angle is calculated according to the following formula:
α = θ × [(u2 − u1)(v2 − v1)] / [(x2 − x1)(y2 − y1)]
where (x1, y1) are the coordinates of the first original pixel point, (x2, y2) are the coordinates of the second original pixel point, (u1, v1) are the coordinates of the second starting pixel point, (u2, v2) are the coordinates of the second maximum pixel point, and θ is the bias angle.
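The rotation-angle formula can be computed directly: α scales the bias angle θ by the ratio of the second screen's area to the display area in the original content. The function name and tuple layout below are illustrative assumptions; the formula itself is the patent's:

```python
def rotation_angle(theta, p1, p2, q1, q2):
    """Compute α = θ·[(u2−u1)(v2−v1)] / [(x2−x1)(y2−y1)].

    p1 = (x1, y1), p2 = (x2, y2): first and second original pixel points.
    q1 = (u1, v1), q2 = (u2, v2): second screen's starting and maximum
    pixel points.
    """
    (x1, y1), (x2, y2) = p1, p2
    (u1, v1), (u2, v2) = q1, q2
    return theta * ((u2 - u1) * (v2 - v1)) / ((x2 - x1) * (y2 - y1))
```

For example, a bias angle of 10 with a 100×100 source region mapped onto a 50×50 target screen yields α = 10 × 2500 / 10000 = 2.5.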
In an alternative, the apparatus 400 further comprises a receiving module and a control module. The receiving module is used for receiving a transformation control instruction aiming at an object in the second display content; and the control module is used for carrying out transformation processing on the objects in the second display content according to the transformation control instruction so as to update the second display content.
In an alternative manner, the apparatus 400 further includes a parameter determination module and a transmission module. The parameter determining module is used for determining parameter information of the updated pixel point according to the updated second display content; the sending module is used for sending the parameter information of the updated pixel point to the first device so that the first device updates the first display content according to the parameter information of the updated pixel point.
In an alternative manner, the transformation control instruction includes one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction, and a color change control instruction.
It should be noted that, in the embodiments of the present invention, the VR display device is a device capable of executing the VR display method, and all embodiments of the VR display method are applicable to the device, and the same or similar beneficial effects can be achieved.
According to the embodiment of the invention, the first display content on the first screen of the first device is obtained, wherein the first display content comprises the parameter information of a plurality of pixels, the second display content is determined according to the parameter information of the pixels, and the second display content is displayed on the second screen of the second device, so that a user of the second device can watch the content watched by the user of the first device, other people except the wearer of the VR device can watch the display content of the VR device, and the display content of the second device and the display content of the first device can keep the same visual angle.
FIG. 5 illustrates a schematic diagram of a computing device according to an embodiment of the present invention; the embodiments of the present invention do not limit the specific implementation of the computing device. The computing device may be the first device or the second device.
As shown in fig. 5, a computing device may include: a display 501, a processor 502, a communication interface (Communications Interface) 504, a memory 506, and a communication bus 508.
Wherein, the display 501, the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508. The communication interface 504 is used for communicating with network elements of other devices, such as the first device, the second device, or other servers. The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the VR display method embodiments described above.
In particular, program 510 may include program code comprising computer-executable instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk storage device.
The program 510 may be specifically adapted to be invoked by the processor 502 to cause a computing device to perform the relevant steps of the VR display method embodiments described above.
According to the embodiment of the invention, the first display content on the first screen of the first device is acquired through the second device, wherein the first display content comprises parameter information of a plurality of pixels, the second display content is determined according to the parameter information of the pixels, and is displayed on the second screen of the second device, so that a user of the second device can watch the content watched by the user of the first device, other people except for a wearer of the VR device can watch the display content of the VR device, and the display content of the second device and the display content of the first device can keep the same visual angle.
Embodiments of the present invention provide a computer readable storage medium storing at least one executable instruction that, when executed on a VR display system, causes the VR display system to perform the VR display method of any of the above method embodiments. The executable instructions may be particularly useful for causing a VR display system to perform operations in the VR display methods of the above embodiments.
According to the embodiment of the invention, the first display content on the first screen of the first device is acquired through the second device, wherein the first display content comprises parameter information of a plurality of pixels, the second display content is determined according to the parameter information of the pixels, and is displayed on the second screen of the second device, so that a user of the second device can watch the content watched by the user of the first device, other people except for a wearer of the VR device can watch the display content of the VR device, and the display content of the second device and the display content of the first device can keep the same visual angle.
The embodiment of the invention provides a VR display device for executing the VR display method.
Embodiments of the present invention provide a computer program that is callable by a processor to cause a computing device to perform the VR display method of any of the method embodiments described above.
An embodiment of the present invention provides a computer program product, including a computer program stored on a computer readable storage medium, the computer program including program instructions which, when run on a computer, cause the computer to perform the VR display method of any of the above-described method embodiments.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose an enabling and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (9)

1. A VR display method, comprising:
the second device obtains first display content on a first screen of the first device, including: acquiring original content corresponding to the first display content, acquiring a user gesture, and processing the original content according to the user gesture; wherein the first display content on the first screen refers to the whole content currently displayed on the first screen; when the first screen comprises a plurality of first display screens, the first display content is the content on all the first display screens;
Determining a first starting pixel point and a first maximum pixel point in the first screen; acquiring a first original pixel point corresponding to the first initial pixel point in the original content, and acquiring a second original pixel point corresponding to the first maximum pixel point;
determining a display area in the original content according to the first original pixel point and the second original pixel point; acquiring parameter information of each pixel point in the display area, thereby acquiring the first display content; the first display content comprises parameter information of a plurality of pixel points;
the first device is a virtual reality display device, and the second device is a virtual reality display device or other electronic device; the first device and the second device communicate with each other, including connection through a connecting wire or interconnection based on a communication network; after the first device establishes a connection with the second device, the first device sends the first display content to the second device;
and the second device determines second display content according to the parameter information of the pixel point, and displays the second display content on a second screen of the second device.
2. The method of claim 1, wherein the parameter information of the pixel point includes a pixel identification, a bias angle, and a color parameter;
the obtaining the parameter information of each pixel point in the display area specifically includes:
determining the pixel identification of the pixel point;
determining the bias angle of the pixel point according to eyeball focus information of the first device;
and determining the color parameters of the pixel points according to the original content.
3. The method according to claim 2, wherein determining the second display content according to the parameter information of the pixel point specifically includes:
determining a second initial pixel point and a second maximum pixel point in the second screen;
determining intermediate content according to the pixel identification of the pixel point, the color parameter of the pixel point, the first original pixel point, the second initial pixel point and the second maximum pixel point;
determining a rotation angle according to the bias angle of the pixel point;
and rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
4. The method according to claim 3, wherein the determining the rotation angle according to the bias angle of the pixel point specifically includes:
the rotation angle is calculated according to the following formula:
α = θ × [(u2 − u1)(v2 − v1)] / [(x2 − x1)(y2 − y1)]
where (x1, y1) are the coordinates of the first original pixel point, (x2, y2) are the coordinates of the second original pixel point, (u1, v1) are the coordinates of the second starting pixel point, (u2, v2) are the coordinates of the second maximum pixel point, and θ is the bias angle.
5. The method according to any one of claims 1-4, further comprising:
the second device receives a transformation control instruction for an object in the second display content;
and the second equipment performs transformation processing on the objects in the second display content according to the transformation control instruction so as to update the second display content.
6. The method of claim 5, wherein the method further comprises:
the second device determines parameter information of the updated pixel point according to the updated second display content;
and the second device sends the parameter information of the updated pixel point to the first device so that the first device updates the first display content according to the parameter information of the updated pixel point.
7. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the VR display method of any one of claims 1-6.
8. A VR display system, comprising: a first device and a second device, wherein the first device is connected with the second device, including connection through a connecting wire or interconnection based on a communication network; the first device is a virtual reality display device, and the second device is a virtual reality display device or other electronic device;
the second device is configured to:
acquiring first display content on a first screen of the first device, including: the first device obtains original content corresponding to the first display content, obtains a user gesture, and processes the original content according to the user gesture; wherein the first display content on the first screen refers to the whole content currently displayed on the first screen; when the first screen comprises a plurality of first display screens, the first display content is the content on all the first display screens;
Determining a first starting pixel point and a first maximum pixel point in the first screen; acquiring a first original pixel point corresponding to the first initial pixel point in the original content, and acquiring a second original pixel point corresponding to the first maximum pixel point;
determining a display area in the original content according to the first original pixel point and the second original pixel point; acquiring parameter information of each pixel point in the display area, thereby acquiring the first display content; the first display content comprises parameter information of a plurality of pixel points;
after the first device establishes connection with the second device, the first device sends the first display content to the second device;
and determining second display content according to the parameter information of the pixel point, and displaying the second display content on a second screen of the second device.
9. A computer-readable storage medium having stored therein at least one executable instruction that, when executed on a computing device, causes the computing device to perform the operations of the VR display method of any one of claims 1-6.
CN202010245436.2A 2020-03-31 2020-03-31 VR display method and system Active CN113467602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245436.2A CN113467602B (en) 2020-03-31 2020-03-31 VR display method and system


Publications (2)

Publication Number Publication Date
CN113467602A CN113467602A (en) 2021-10-01
CN113467602B true CN113467602B (en) 2024-03-19

Family

ID=77865596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245436.2A Active CN113467602B (en) 2020-03-31 2020-03-31 VR display method and system

Country Status (1)

Country Link
CN (1) CN113467602B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881263A (en) * 2015-05-27 2015-09-02 天津三星电子有限公司 Display control method and display equipment
US9240069B1 (en) * 2015-06-30 2016-01-19 Ariadne's Thread (Usa), Inc. Low-latency virtual reality display system
WO2016021252A1 (en) * 2014-08-05 2016-02-11 ソニー株式会社 Information processing device, information processing method, and image display system
CN106375750A (en) * 2016-09-30 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Image display method and image display apparatus
CN108769668A (en) * 2018-05-31 2018-11-06 歌尔股份有限公司 Method for determining position and device of the pixel in VR display screens in camera imaging
CN108989785A (en) * 2018-08-22 2018-12-11 上海玮舟微电子科技有限公司 Naked eye 3D display method, apparatus, terminal and medium based on tracing of human eye
EP3425483A2 (en) * 2017-07-07 2019-01-09 Accenture Global Solutions Limited Intelligent object recognizer
WO2019080295A1 (en) * 2017-10-23 2019-05-02 上海玮舟微电子科技有限公司 Naked-eye 3d display method and control system based on eye tracking
CN109739356A (en) * 2018-12-29 2019-05-10 歌尔股份有限公司 Control method, device and the VR helmet that image is shown in VR system
CN110544209A (en) * 2018-05-29 2019-12-06 京东方科技集团股份有限公司 Image processing method and equipment and virtual reality display device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2018233733B2 (en) * 2017-03-17 2021-11-11 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same


Also Published As

Publication number Publication date
CN113467602A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
US9892565B2 (en) Reprojection OLED display for augmented reality experiences
US10692274B2 (en) Image processing apparatus and method
US9747726B2 (en) Late stage reprojection
US20220301226A1 (en) Head-Mounted Display Device and Method Thereof
US11508087B2 (en) Texture-based pose validation
CN115643485B (en) Shooting method and electronic equipment
CN113467602B (en) VR display method and system
CN116341586B (en) Code scanning method, electronic equipment and storage medium
CN111524240A (en) Scene switching method and device and augmented reality equipment
US11341729B2 (en) Method and electronic device for correcting external reality pixels and virtual content pixels within an augmented reality environment
US11961184B2 (en) System and method for scene reconstruction with plane and surface reconstruction
CN114388056B (en) AR-based protein section generation method
US20240161340A1 (en) Calibrating camera in electronic device
US11935203B2 (en) Rotational navigation system in augmented reality environment
KR102534449B1 (en) Image processing method, device, electronic device and computer readable storage medium
US20230418062A1 (en) Color calibration tool for see-through augmented reality environment
US20230419530A1 (en) Augmented reality image reproduction assistant
CN116007621A (en) Positioning mode switching method and device, intelligent wearable device and storage medium
US20220222923A1 (en) Improved object detection
CN115103148A (en) Video implementation method and device, electronic equipment and readable storage medium
CN116563740A (en) Control method and device based on augmented reality, electronic equipment and storage medium
WO2024011125A1 (en) Managing content across multiple windows
CN116228936A (en) Image processing method, device, terminal device, storage medium and program product
CN115690285A (en) Rendering parameter determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant