CN113467602A - VR display method and system - Google Patents

VR display method and system

Info

Publication number
CN113467602A
CN113467602A (application CN202010245436.2A)
Authority
CN
China
Prior art keywords
pixel point
display content
display
content
original
Prior art date
Legal status
Granted
Application number
CN202010245436.2A
Other languages
Chinese (zh)
Other versions
CN113467602B (en)
Inventor
朱海琦
徐林忠
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Zhejiang Co Ltd
Priority to CN202010245436.2A
Publication of CN113467602A
Application granted
Publication of CN113467602B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/14 — Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 — Controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/147 — Digital output to display device using display panels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention relate to the technical field of terminals and disclose a VR display method and system. The method includes: a second device acquires first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixel points; the second device then determines second display content according to the parameter information of the pixel points and displays the second display content on a second screen of the second device. In this manner, embodiments of the present invention enable people other than the wearer of VR glasses to view the display content of the VR glasses.

Description

VR display method and system
Technical Field
Embodiments of the present invention relate to the technical field of terminals, and in particular to a VR display method and system.
Background
Virtual reality (VR) glasses immerse a user in a virtual environment by isolating the wearer's vision and hearing from the outside world through a glasses-mounted display, where the displayed content in the virtual environment changes as the wearer's viewing angle changes. With the continuous development of electronic technology, VR glasses already allow users to operate controls on the virtual screen through peripheral devices (e.g., handheld controllers) according to their viewpoint and position in the virtual environment.
At present, the display content of a VR device is generally presented only to the wearer of the VR device, and people other than the wearer cannot view it.
Disclosure of Invention
In view of the foregoing problem, embodiments of the present invention provide a VR display method and system that enable people other than the wearer of a VR device to view the display content of the VR device.
According to a first aspect of the embodiments of the present invention, there is provided a VR display method, including: acquiring, by a second device, first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixel points; and determining, by the second device, second display content according to the parameter information of the pixel points, and displaying the second display content on a second screen of the second device.
In an optional manner, the acquiring first display content on a first screen of the first device specifically includes: acquiring original content corresponding to the first display content; determining a first starting pixel point and a first maximum pixel point in the first screen; in the original content, acquiring a first original pixel point corresponding to the first starting pixel point, and acquiring a second original pixel point corresponding to the first maximum pixel point; determining a display area in the original content according to the first original pixel point and the second original pixel point; and acquiring parameter information of each pixel point in the display area, thereby obtaining the first display content.
In an optional manner, the parameter information of a pixel point includes a pixel identifier, a skew angle, and a color parameter; the acquiring parameter information of each pixel point in the display area specifically includes: determining the pixel identifier of the pixel point; determining the skew angle of the pixel point according to eyeball focus information of the first device; and determining the color parameter of the pixel point according to the original content.
In an optional manner, the determining the second display content according to the parameter information of the pixel points specifically includes: determining a second starting pixel point and a second maximum pixel point in the second screen; determining intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point; determining a rotation angle according to the skew angle of the pixel point; and rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
In an optional manner, the determining a rotation angle according to the skew angle of the pixel point specifically includes calculating the rotation angle according to the following formula:
α = θ * [(u2 - u1)(v2 - v1)] / [(x2 - x1)(y2 - y1)]
where (x1, y1) is the coordinate of the first original pixel point, (x2, y2) is the coordinate of the second original pixel point, (u1, v1) is the coordinate of the second starting pixel point, (u2, v2) is the coordinate of the second maximum pixel point, and θ is the skew angle.
In an optional manner, the method further includes: receiving, by the second device, a transformation control instruction for an object in the second display content; and transforming, by the second device, the object in the second display content according to the transformation control instruction, so as to update the second display content.
In an optional manner, the method further includes: determining, by the second device, updated parameter information of the pixel points according to the updated second display content; and sending, by the second device, the updated parameter information of the pixel points to the first device, so that the first device updates the first display content according to the updated parameter information of the pixel points.
In an optional manner, the transformation control instruction includes one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction, and a color-change control instruction.
According to a second aspect of the embodiments of the present invention, there is provided a VR display system, including a first device and a second device, where the first device is connected to the second device; the second device is configured to: acquire first display content on a first screen of the first device, where the first display content includes parameter information of a plurality of pixel points; and determine second display content according to the parameter information of the pixel points, and display the second display content on a second screen of the second device.
According to a third aspect of the embodiments of the present invention, there is provided a VR display apparatus, including: an acquisition module, configured to acquire first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixel points; and a display module, configured to determine second display content according to the parameter information of the pixel points and display the second display content on a second screen of the second device.
According to a fourth aspect of the embodiments of the present invention, there is provided a computing device, including: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another through the communication bus; the memory is configured to store at least one executable instruction that causes the processor to perform the VR display method described above.
According to a fifth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having at least one executable instruction stored therein, the executable instruction causing a computing device to execute the VR display method described above.
In the embodiments of the present invention, the second device acquires the first display content on the first screen of the first device, where the first display content includes parameter information of a plurality of pixel points; the second device determines the second display content according to the parameter information of the pixel points and displays it on the second screen of the second device. In this way, a user of the second device can also view the content viewed by the user of the first device, people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as that of the first device.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the content of this specification, and in order to make the above and other objects, features, and advantages of the embodiments more apparent, the detailed description of the invention is provided below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a schematic diagram of a VR display system in accordance with embodiments of the present invention;
FIG. 2 is a flow chart of a VR display method according to an embodiment of the invention;
FIG. 3 is a flow chart illustrating a VR display method according to another embodiment of the invention;
FIG. 4 is a schematic diagram of a VR display device according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
At present, the display content of a VR device is generally presented only to the wearer of the VR device; people other than the wearer cannot view the display content, and therefore cannot effectively communicate with the wearer about the content the wearer is viewing.
Accordingly, embodiments of the present invention provide a VR display method and system, which enable people other than a wearer of a VR device to view display content of the VR device.
Specifically, the embodiments of the present invention will be further explained below with reference to the drawings.
It should be understood that the following examples are provided by way of illustration and are not intended to limit the invention in any way to the particular embodiment disclosed.
Fig. 1 shows a schematic structural diagram of a VR display system according to an embodiment of the present invention. As shown in fig. 1, the VR display system 100 includes: a first device 110 and a second device 120.
The first device 110 and the second device 120 are capable of communicating with each other, for example through a connection line or via a communication network. The communication network may be a local area network, or a wide area network switched by a relay device. When the communication network is a local area network, it may be, for example, a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, it may be, for example, a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future-evolution public land mobile network (PLMN), the Internet, or the like.
The first device 110 is a virtual reality display device, such as a head-mounted virtual reality device (head-mounted display, HMD), also called a virtual reality helmet, VR glasses, or a glasses display.
Specifically, the first device 110 includes a first screen 111. The first screen 111 includes a first display screen for displaying images, videos, and the like, so that when the user wears the first device 110 and views content on the first display screen through the first screen 111, the user has an immersive, on-the-scene experience. The first display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, or the like. In some embodiments, the first screen 111 may include one or more first display screens.
The second device 120 may be a virtual reality display device or another electronic device. For example, the other electronic device may be a personal computer (PC), a personal digital assistant (PDA), a tablet computer, a notebook computer, a car computer (carputer), a handheld game console, smart glasses, a smart watch, a drone, or the like.
Specifically, the second device 120 includes a second screen 121. The second screen 121 includes a second display screen for displaying images, videos, and the like. The second display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, or the like. In some embodiments, the second screen 121 may include one or more second display screens.
It is to be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the first device 110 and the second device 120. In other embodiments of the present application, the first device 110 or the second device 120 may include more or fewer components than shown, combine certain components, split certain components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of hardware and software.
In the present embodiment, the first device 110 is configured to determine first display content on the first screen 111 of the first device 110, and is further configured to send the first display content to the second device 120. The second device 120 is configured to acquire the first display content on the first screen of the first device 110, where the first display content includes parameter information of a plurality of pixel points, and is further configured to determine second display content according to the parameter information of the pixel points and display the second display content on the second screen 121 of the second device 120. In this manner, the embodiments of the present invention enable people other than the wearer of the VR device to view the display content of the VR device.
It should be noted that the first device 110 may be simultaneously connected to one or more second devices 120, and both the first device 110 and each of the second devices 120 can implement the VR display method provided by the embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating a VR display method according to an embodiment of the present invention. The method may be applied to the VR display system 100 of fig. 1. As shown in fig. 2, the method includes:
Step 210, the second device acquires first display content on a first screen of the first device, where the first display content includes parameter information of a plurality of pixel points.
The first display content on the first screen refers to all content currently displayed on the first screen. When the first screen comprises a plurality of first display screens, the first display content is the content on all the first display screens. The first device may send the first display content to the second device after the first device establishes a connection with the second device.
Of course, before the second device acquires the first display content on the first screen of the first device, the second device needs to be connected to the first device. The first device may pre-store a list of connectable device identifiers, so that the first device can determine whether to accept the second device's connection by looking up the device identifier in that list.
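As a minimal illustrative sketch of this connection check (the identifier values and the Python representation are assumptions, not part of the patent):

```python
# Hypothetical sketch: the first device keeps a pre-stored list of
# connectable device identifiers and accepts a connection only when the
# requesting device's identifier appears in that list.

ALLOWED_DEVICE_IDS = {"second-device-001", "tablet-042"}  # assumed identifiers

def accept_connection(device_id: str) -> bool:
    """Return True if the requesting device may connect to the first device."""
    return device_id in ALLOWED_DEVICE_IDS
```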
Step 210 specifically includes:
Step 211, acquiring original content corresponding to the first display content.
The original content corresponding to the first display content refers to the content resource from which the first device obtains the first display content after processing; that is, the original content is unprocessed. The method further includes: the first device acquires the original content, acquires the user gesture, processes the original content according to the user gesture to obtain the first display content, and sends the first display content to the second device. The original content may be sent to the first device by another device (e.g., the second device), or may be generated by the first device from images captured by its camera.
Step 212, determining a first starting pixel point and a first maximum pixel point in the first screen.
The first starting pixel point and the first maximum pixel point are two vertices of the first screen. For example, when the first screen includes two rectangular first display screens that respectively display the left-eye and right-eye images, the first starting pixel point may be the upper-left vertex of the left first display screen, and the first maximum pixel point may be the lower-right vertex of the left first display screen or the lower-right vertex of the right first display screen. As another example, when the first screen includes two circular first display screens, the first starting pixel point and the first maximum pixel point may be the two endpoints of one diameter of the first display screen. In this step, the determining a first starting pixel point and a first maximum pixel point in the first screen may specifically be: acquiring a first screen coordinate system, and determining the coordinates of the first starting pixel point and the first maximum pixel point in the first screen coordinate system. For example, in the first screen coordinate system, the coordinates of the first starting pixel point and the first maximum pixel point are determined to be (m1, n1) and (m2, n2) respectively. As another example, in the first screen coordinate system, the coordinates of the first starting pixel point and the first maximum pixel point are determined to be (0, 0) and (m2, n2) respectively.
Step 213, in the original content, acquiring a first original pixel point corresponding to the first starting pixel point, and acquiring a second original pixel point corresponding to the first maximum pixel point.
The first display content displayed on the first screen is converted from part or all of the original content, so the pixel points corresponding to the first display content can be found in the original content. Therefore, in this step, the acquiring, in the original content, of the first original pixel point corresponding to the first starting pixel point and the second original pixel point corresponding to the first maximum pixel point may specifically be: in the original-content coordinate system, determining the coordinates of the first original pixel point and the second original pixel point according to the coordinates of the first starting pixel point and the first maximum pixel point in the first screen coordinate system. For example, in the original-content coordinate system, the coordinates of the first original pixel point and the second original pixel point are determined to be (x1, y1) and (x2, y2) respectively.
Step 214, determining a display area in the original content according to the first original pixel point and the second original pixel point.
In this step, a display area similar in shape to the first screen can be determined in the original content according to the coordinates of the first original pixel point and the second original pixel point.
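To make steps 212–214 concrete, the following minimal Python sketch maps the two screen vertices into the original content. The `screen_to_content` mapping, the `Rect` type, and all names are illustrative assumptions, since the patent does not fix how screen coordinates map into the original content:

```python
# Hypothetical sketch of steps 212-214: map the first screen's starting and
# maximum pixel points into the original-content coordinate system and take
# the enclosed rectangle as the display area.

from dataclasses import dataclass

@dataclass
class Rect:
    x1: float  # first original pixel point (top-left)
    y1: float
    x2: float  # second original pixel point (bottom-right)
    y2: float

def display_area_in_original(first_start, first_max, screen_to_content):
    """first_start/first_max are (x, y) screen coordinates; screen_to_content
    is an assumed callable mapping a screen point to a content point."""
    x1, y1 = screen_to_content(first_start)  # first original pixel point
    x2, y2 = screen_to_content(first_max)    # second original pixel point
    return Rect(x1, y1, x2, y2)

# Example with an assumed offset-and-scale mapping:
area = display_area_in_original(
    (0, 0), (1920, 1080),
    lambda p: (2.0 * p[0] + 100, 2.0 * p[1] + 50),
)
print(area)  # Rect(x1=100.0, y1=50.0, x2=3940.0, y2=2210.0)
```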
Step 215, obtaining parameter information of each pixel point in the display area, thereby obtaining the first display content.
The parameter information of a pixel point includes a pixel identifier, a skew angle, and a color parameter. The pixel identifier may be the coordinate of the pixel point in the original-content coordinate system; the skew angle may be the offset angle of the pixel point from the center of the original content's field of view; the color parameter may be the RGB value of the pixel point. Step 215 may specifically include:
Step 2151, determining the pixel identifier of the pixel point;
Step 2152, determining the skew angle of the pixel point according to eyeball focus information of the first device;
Step 2153, determining the color parameter of the pixel point according to the original content.
The pixel identifier (x, y) of a pixel point can be determined from the coordinate of the pixel point in the original-content coordinate system. The eyeball focus information of the first device can be obtained from sensors arranged on the first device, and the offset angle between the pixel point and the center of the original content's field of view, namely the skew angle of the pixel point, can then be determined from that eyeball focus information. For example, if the coordinate of the center of the original content's field of view is (0, 0) and the pixel point is (x, y), the skew angle of the pixel point is θ = arctan(y/x). The color parameter (R, G, B) of the pixel point is determined from the RGB value of the pixel point in the original content.
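A minimal sketch of step 215's per-pixel parameter collection follows; the dictionary-based pixel source is an assumption, and atan2 is used in place of the text's arctan(y/x) only to avoid division by zero at x = 0:

```python
import math

# Hypothetical sketch of step 215: produce (pixel identifier, skew angle,
# color parameter) for each pixel point in the display area. The
# field-of-view centre is taken to be (0, 0), as in the example above.

def pixel_parameters(display_area_pixels, fov_center=(0.0, 0.0)):
    """display_area_pixels: dict mapping (x, y) -> (R, G, B)."""
    cx, cy = fov_center
    for (x, y), rgb in display_area_pixels.items():
        # Offset angle of the pixel from the field-of-view centre.
        theta = math.atan2(y - cy, x - cx)
        yield (x, y), theta, rgb

# Example:
for pid, theta, rgb in pixel_parameters({(3, 4): (255, 0, 0)}):
    print(pid, round(theta, 3), rgb)  # (3, 4) 0.927 (255, 0, 0)
```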
Step 220, the second device determines second display content according to the parameter information of the pixel points, and displays the second display content on a second screen of the second device.
The second display content refers to content required to be displayed on a second screen of the second device. When the second screen includes a plurality of second display screens, the contents displayed on the respective second display screens may be the same.
Step 220 may specifically include:
Step 221, determining a second starting pixel point and a second maximum pixel point in the second screen.
The second starting pixel point and the second maximum pixel point are two vertices of the second screen. Step 221 may specifically be: determining a second screen coordinate system, and determining the coordinates of the second starting pixel point and the second maximum pixel point in the second screen coordinate system.
Step 222, determining the intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point.
The intermediate content is the data produced in the process of converting the first display content into the second display content. This step may specifically be: determining the proportional relationship between the first screen and the second screen according to the coordinates of the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point; determining the pixel identifier of each pixel point of the intermediate content according to the pixel identifier of the corresponding pixel point of the first display content and that proportional relationship; and determining the color parameter of each pixel point of the intermediate content according to the correspondence between the pixel points of the first display content and the pixel points of the intermediate content. For example, assume that the coordinates of the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point are (x1, y1), (x2, y2), (u1, v1), and (u2, v2) respectively. If the pixel identifier of a pixel point of the first display content is (x, y) and the pixel identifier of the corresponding pixel point of the intermediate content is (u, v), then u = (u2 - u1) * x / (x2 - x1) and v = (v2 - v1) * y / (y2 - y1).
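The scaling can be transcribed directly from the formulas above; the following Python sketch is illustrative only:

```python
# Hypothetical sketch of step 222: map a first-display pixel identifier (x, y)
# to an intermediate-content identifier (u, v) by the screen-ratio formulas
# u = (u2 - u1) * x / (x2 - x1), v = (v2 - v1) * y / (y2 - y1).

def to_intermediate(x, y, x1, y1, x2, y2, u1, v1, u2, v2):
    u = (u2 - u1) * x / (x2 - x1)
    v = (v2 - v1) * y / (y2 - y1)
    return u, v

# Example: a 200x100 source area scaled onto a 1000x500 second screen.
print(to_intermediate(50, 20, 0, 0, 200, 100, 0, 0, 1000, 500))  # (250.0, 100.0)
```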
Step 223, determining the rotation angle according to the skew angle of the pixel point.
The rotation angle is calculated according to the following formula:
α=θ*[(u2-u1)(v2-v1)]/[(x2-x1)(y2-y1)]
where (x1, y1) is the coordinate of the first original pixel point, (x2, y2) is the coordinate of the second original pixel point, (u1, v1) is the coordinate of the second starting pixel point, (u2, v2) is the coordinate of the second maximum pixel point, and θ is the skew angle.
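Transcribed directly into code (an illustrative sketch under the same coordinate conventions):

```python
# Hypothetical sketch of step 223: the rotation angle scales the skew angle
# by the area ratio of the second-screen region to the source display area.

def rotation_angle(theta, x1, y1, x2, y2, u1, v1, u2, v2):
    return theta * ((u2 - u1) * (v2 - v1)) / ((x2 - x1) * (y2 - y1))

# Example: doubling both dimensions quadruples the scale factor.
print(rotation_angle(0.1, 0, 0, 100, 100, 0, 0, 200, 200))  # 0.4 (up to float rounding)
```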
Step 224, rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
After the intermediate content is rendered according to the pixel identifier and color parameter of each of its pixel points, the intermediate content is rotated according to the rotation angle, and the rotated intermediate content is determined as the second display content.
In some other embodiments, the rotation angle of each pixel point in the intermediate content may be determined separately, and each pixel point may be displayed according to its corresponding rotation angle, thereby generating the second display content.
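As an illustrative sketch of the rotation in step 224 (the rotation centre and the dictionary-based pixel representation are assumptions the patent does not fix):

```python
import math

# Hypothetical sketch of step 224: rotate every pixel identifier of the
# intermediate content by the rotation angle alpha about an assumed centre.

def rotate_pixels(pixels, alpha, center=(0.0, 0.0)):
    """pixels: dict mapping (u, v) -> (R, G, B); returns the rotated dict."""
    cu, cv = center
    cos_a, sin_a = math.cos(alpha), math.sin(alpha)
    rotated = {}
    for (u, v), rgb in pixels.items():
        du, dv = u - cu, v - cv
        rotated[(cu + du * cos_a - dv * sin_a,
                 cv + du * sin_a + dv * cos_a)] = rgb
    return rotated
```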
In the embodiments of the present invention, the second device acquires the first display content on the first screen of the first device, where the first display content includes parameter information of a plurality of pixel points; the second device determines the second display content according to the parameter information of the pixel points and displays it on the second screen of the second device. In this way, a user of the second device can also view the content viewed by the user of the first device, people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as that of the first device.
Fig. 3 is a flowchart illustrating a VR display method according to another embodiment of the present invention. The method may be applied to the VR display system 100 of fig. 1. The difference from the above embodiment is that, as shown in fig. 3, the method further includes:
Step 310, the second device receives a transformation control instruction for an object in the second display content.
The user may trigger, on the second device, a transformation control instruction for an object in the second display content. The transformation control instruction may include one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction, and a color-change control instruction, so as to move, rotate, enlarge, reduce, or change the color of the object in the second display content.
Step 320, the second device transforms the object in the second display content according to the transformation control instruction, so as to update the second display content.
When the transformation control instruction is received, the second device performs the corresponding transformation on the object in the second display content according to the instruction, so that the second display content is updated. For example, assuming the transformation control instruction is a movement control instruction, the coordinates of each pixel point representing the object are transformed to move the object; after the object is moved, the rotation angle of the object may be adapted to the object's current position, so as to update the second display content.
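For a movement control instruction, the transformation can be sketched as follows (the pixel-set representation of an object is an assumption):

```python
# Hypothetical sketch of step 320 for a movement control instruction: an
# object is represented as a set of pixel identifiers with colors, and the
# move translates every identifier by the requested offset.

def move_object(object_pixels, dx, dy):
    """object_pixels: dict mapping (u, v) -> (R, G, B)."""
    return {(u + dx, v + dy): rgb for (u, v), rgb in object_pixels.items()}

# After the move, the object's rotation angle may be recomputed from its new
# position (see the rotation_angle sketch above) before the second display
# content is redrawn.
```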
Step 330, the second device determines updated parameter information of the pixel points according to the updated second display content.
The updated parameter information of the pixel point may include an updated pixel identifier, an updated skew angle, and an updated color parameter. The second device may obtain parameter information of each updated pixel point according to the updated second display content.
Step 340, the second device sends the updated parameter information of the pixel point to the first device, so that the first device updates the first display content according to the updated parameter information of the pixel point.
The updating of the first display content by the first device according to the updated parameter information is the reverse process of step 220, and is not described here again.
In the embodiments of the present invention, the second device receives a transformation control instruction for an object in the second display content, transforms the object according to the instruction to update the second display content, determines updated parameter information of the pixel points according to the updated second display content, and sends the updated parameter information to the first device, so that the first device updates the first display content accordingly. In this way, a user of the second device can view the content viewed by the user of the first device at the same viewing angle, and can also manipulate objects in that content, enabling people other than the wearer of the VR device to interact with the wearer.
Fig. 4 is a schematic structural diagram of a VR display apparatus according to an embodiment of the present invention. The apparatus may be applied to a second device. As shown in fig. 4, the apparatus 400 includes: an acquisition module 410 and a display module 420.
The acquisition module 410 is configured to acquire first display content on a first screen of a first device, where the first display content includes parameter information of a plurality of pixel points; the display module 420 is configured to determine second display content according to the parameter information of the pixel points and display the second display content on a second screen of the second device.
In an optional manner, the acquisition module 410 is specifically configured to: acquire original content corresponding to the first display content; determine a first starting pixel point and a first maximum pixel point in the first screen; in the original content, acquire a first original pixel point corresponding to the first starting pixel point, and acquire a second original pixel point corresponding to the first maximum pixel point; determine a display area in the original content according to the first original pixel point and the second original pixel point; and acquire parameter information of each pixel point in the display area, thereby obtaining the first display content.
In an optional manner, the parameter information of a pixel point includes a pixel identifier, a skew angle, and a color parameter; the acquisition module 410 is further specifically configured to: determine the pixel identifier of the pixel point; determine the skew angle of the pixel point according to eyeball focus information of the first device; and determine the color parameter of the pixel point according to the original content.
In an optional manner, the display module 420 is specifically configured to: determine a second starting pixel point and a second maximum pixel point in the second screen; determine intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point; determine a rotation angle according to the skew angle of the pixel point; and rotate the intermediate content according to the rotation angle, and determine the rotated intermediate content as the second display content.
In an optional manner, the display module 420 is specifically configured to calculate the rotation angle according to the following formula:
α=θ*[(u2-u1)(v2-v1)]/[(x2-x1)(y2-y1)]
where (x1, y1) is the coordinate of the first original pixel point, (x2, y2) is the coordinate of the second original pixel point, (u1, v1) is the coordinate of the second starting pixel point, (u2, v2) is the coordinate of the second maximum pixel point, and θ is the skew angle.
In an optional manner, the apparatus 400 further includes a receiving module and a control module. The receiving module is configured to receive a transformation control instruction for an object in the second display content; the control module is configured to transform the object in the second display content according to the transformation control instruction, so as to update the second display content.
In an optional manner, the apparatus 400 further includes a parameter determining module and a sending module. The parameter determining module is configured to determine updated parameter information of the pixel points according to the updated second display content; the sending module is configured to send the updated parameter information of the pixel points to the first device, so that the first device updates the first display content according to the updated parameter information of the pixel points.
In an optional manner, the transformation control instruction includes one or more of a movement control instruction, a rotation control instruction, an enlargement control instruction, a reduction control instruction, and a color-change control instruction.
It should be noted that the VR display apparatus provided by the embodiments of the present invention is an apparatus capable of executing the VR display method, and all embodiments of the VR display method are applicable to the apparatus and can achieve the same or similar beneficial effects.
In the embodiments of the present invention, the first display content on the first screen of the first device is acquired, where the first display content includes parameter information of a plurality of pixel points; the second display content is determined according to the parameter information of the pixel points and displayed on the second screen of the second device. In this way, a user of the second device can also view the content viewed by the user of the first device, people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as that of the first device.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device. The computing device may be the first device or the second device.
As shown in fig. 5, the computing device may include: a display 501, a processor 502, a communication interface 504, a memory 506, and a communication bus 508.
The display 501, the processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508. The communication interface 504 is used for communicating with network elements such as the first device, the second device, or another server. The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the VR display method embodiments described above.
In particular, program 510 may include program code comprising computer-executable instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 510 may be specifically invoked by the processor 502 to cause the computing device to perform the relevant steps in the VR display method embodiments described above.
In the embodiments of the present invention, the second device acquires the first display content on the first screen of the first device, where the first display content includes parameter information of a plurality of pixel points; the second device determines the second display content according to the parameter information of the pixel points and displays it on the second screen of the second device. In this way, a user of the second device can also view the content viewed by the user of the first device, people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as that of the first device.
An embodiment of the present invention provides a computer-readable storage medium, where the storage medium stores at least one executable instruction, and when the executable instruction is executed on a VR display system, the VR display system executes a VR display method in any of the above method embodiments. The executable instructions may be specifically configured to cause the VR display system to perform the operations in the VR display method in the above embodiments.
In the embodiments of the present invention, the second device acquires the first display content on the first screen of the first device, where the first display content includes parameter information of a plurality of pixel points; the second device determines the second display content according to the parameter information of the pixel points and displays it on the second screen of the second device. In this way, a user of the second device can also view the content viewed by the user of the first device, people other than the wearer of the VR device can view the display content of the VR device, and the display content of the second device keeps the same viewing angle as that of the first device.
The embodiment of the invention provides a VR display device, which is used for executing the VR display method.
Embodiments of the present invention provide a computer program that can be invoked by a processor to cause a computing device to perform a VR display method in any of the above method embodiments.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when run on a computer, cause the computer to perform a VR display method as in any of the above-described method embodiments.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (10)

1. A VR display method, comprising:
acquiring, by a second device, first display content on a first screen of a first device, wherein the first display content comprises parameter information of a plurality of pixel points;
and determining, by the second device, second display content according to the parameter information of the pixel points, and displaying the second display content on a second screen of the second device.
2. The method according to claim 1, wherein the acquiring first display content on a first screen of the first device specifically includes:
acquiring original content corresponding to the first display content;
determining a first starting pixel point and a first maximum pixel point in the first screen;
in the original content, acquiring a first original pixel point corresponding to the first starting pixel point, and acquiring a second original pixel point corresponding to the first maximum pixel point;
determining a display area in the original content according to the first original pixel point and the second original pixel point;
and acquiring parameter information of each pixel point in the display area so as to acquire the first display content.
3. The method of claim 2, wherein the parameter information of the pixel point comprises a pixel identification, a skew angle, and a color parameter;
the acquiring of the parameter information of each pixel point in the display area specifically includes:
determining pixel identification of the pixel point;
determining the skew angle of the pixel point according to eyeball focus information of the first device;
and determining the color parameters of the pixel points according to the original content.
4. The method according to claim 3, wherein the determining the second display content according to the parameter information of the pixel point specifically includes:
determining a second starting pixel point and a second maximum pixel point in the second screen;
determining intermediate content according to the pixel identifier of the pixel point, the color parameter of the pixel point, the first original pixel point, the second original pixel point, the second starting pixel point, and the second maximum pixel point;
determining a rotation angle according to the skew angle of the pixel point;
and rotating the intermediate content according to the rotation angle, and determining the rotated intermediate content as the second display content.
5. The method according to claim 4, wherein the determining the rotation angle according to the skew angle of the pixel point specifically includes:
the rotation angle is calculated according to the following formula:
α=θ*[(u2-u1)(v2-v1)]/[(x2-x1)(y2-y1)]
where (x1, y1) is the coordinate of the first original pixel point, (x2, y2) is the coordinate of the second original pixel point, (u1, v1) is the coordinate of the second starting pixel point, (u2, v2) is the coordinate of the second maximum pixel point, and θ is the skew angle.
6. The method according to any one of claims 1-5, further comprising:
the second device receiving a transformation control instruction for an object in the second display content;
and transforming, by the second device, the object in the second display content according to the transformation control instruction, so as to update the second display content.
7. The method of claim 6, further comprising:
determining, by the second device, updated parameter information of the pixel points according to the updated second display content;
and sending, by the second device, the updated parameter information of the pixel points to the first device, so that the first device updates the first display content according to the updated parameter information of the pixel points.
8. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations of the VR display method of any of claims 1-7.
9. A VR display system, comprising: the system comprises a first device and a second device, wherein the first device is connected with the second device;
the second device is configured to:
acquiring first display content on a first screen of the first device, wherein the first display content comprises parameter information of a plurality of pixel points;
and determining second display content according to the parameter information of the pixel point, and displaying the second display content on a second screen of the second device.
10. A computer-readable storage medium having stored therein at least one executable instruction that, when executed on a computing device, causes the computing device to perform operations of the VR display method of any of claims 1-7.
CN202010245436.2A 2020-03-31 2020-03-31 VR display method and system Active CN113467602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245436.2A CN113467602B (en) 2020-03-31 2020-03-31 VR display method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010245436.2A CN113467602B (en) 2020-03-31 2020-03-31 VR display method and system

Publications (2)

Publication Number Publication Date
CN113467602A (en) 2021-10-01
CN113467602B CN113467602B (en) 2024-03-19

Family

ID=77865596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245436.2A Active CN113467602B (en) 2020-03-31 2020-03-31 VR display method and system

Country Status (1)

Country Link
CN (1) CN113467602B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881263A (en) * 2015-05-27 2015-09-02 天津三星电子有限公司 Display control method and display equipment
US9240069B1 (en) * 2015-06-30 2016-01-19 Ariadne's Thread (Usa), Inc. Low-latency virtual reality display system
WO2016021252A1 (en) * 2014-08-05 2016-02-11 ソニー株式会社 Information processing device, information processing method, and image display system
CN106375750A (en) * 2016-09-30 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Image display method and image display apparatus
US20180268610A1 (en) * 2017-03-17 2018-09-20 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
CN108769668A (en) * 2018-05-31 2018-11-06 歌尔股份有限公司 Method for determining position and device of the pixel in VR display screens in camera imaging
CN108989785A (en) * 2018-08-22 2018-12-11 上海玮舟微电子科技有限公司 Naked eye 3D display method, apparatus, terminal and medium based on tracing of human eye
EP3425483A2 (en) * 2017-07-07 2019-01-09 Accenture Global Solutions Limited Intelligent object recognizer
WO2019080295A1 (en) * 2017-10-23 2019-05-02 上海玮舟微电子科技有限公司 Naked-eye 3d display method and control system based on eye tracking
CN109739356A (en) * 2018-12-29 2019-05-10 歌尔股份有限公司 Control method, device and the VR helmet that image is shown in VR system
CN110544209A (en) * 2018-05-29 2019-12-06 京东方科技集团股份有限公司 Image processing method and equipment and virtual reality display device

Also Published As

Publication number Publication date
CN113467602B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
CN112703464B (en) Distributed gaze point rendering based on user gaze
CN106803884B (en) Image processing apparatus
CN109743626B (en) Image display method, image processing method and related equipment
CN111324250B (en) Three-dimensional image adjusting method, device and equipment and readable storage medium
EP3729378B1 (en) Head-mounted display device and method thereof
KR102461232B1 (en) Image processing method and apparatus, electronic device, and storage medium
CN109741289B (en) Image fusion method and VR equipment
CN111833461B (en) Method and device for realizing special effect of image, electronic equipment and storage medium
US20180219975A1 (en) Sharing Mediated Reality Content
US20190222744A1 (en) Image Photographing Method, Apparatus, and Terminal
CN108885856A (en) Information processing equipment, information processing method and program
JP2021185498A (en) Method for generating 3d object arranged in augmented reality space
CN114153348A (en) Cursor prompting method and host
CN115643485B (en) Shooting method and electronic equipment
CN107913519B (en) Rendering method of 2D game and mobile terminal
CN116341586B (en) Code scanning method, electronic equipment and storage medium
CN113467602B (en) VR display method and system
WO2017173583A1 (en) Terminal display anti-shake method and apparatus
JP2018028572A (en) Image processing device and image processing method
US20200066234A1 (en) VR Drawing Method, Device, and System
JP6472864B1 (en) Image distribution system and distribution image generation method
CN111524240A (en) Scene switching method and device and augmented reality equipment
CN114388056B (en) AR-based protein section generation method
US20240185463A1 (en) Head-Mounted Display Device and Method Thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant