CN112732088A - Virtual reality equipment and monocular screen capturing method

Virtual reality equipment and monocular screen capturing method

Info

Publication number: CN112732088A
Application number: CN202110065017.5A
Authority: CN (China)
Prior art keywords: display, virtual reality, screen capture, interface, type
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN112732088B
Inventors: 郑美燕, 王大勇
Current assignee: Hisense Visual Technology Co., Ltd.
Application filed by Hisense Visual Technology Co., Ltd.; priority to CN202110065017.5A
Publication of CN112732088A; application granted; publication of CN112732088B
Priority to PCT/CN2021/137060 (WO2022151883A1)

Classifications

    All classifications fall under section G (Physics), class G06 (Computing; Calculating or Counting):
    • G06F 3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 9/451: Arrangements for executing specific programs; execution arrangements for user interfaces
    • G06T 15/205: 3D image rendering; geometric effects; perspective computation; image-based rendering


Abstract

The application provides a virtual reality device and a monocular screen capture method. After a user inputs a screen capture control instruction, the method extracts the unrendered image information for the content displayed on one side's display and saves it as a screen capture picture file, completing the monocular screen capture operation. Because the extracted image information has not undergone distortion processing, the content of the captured picture file differs only slightly from the virtual scene content actually displayed, which alleviates the problem in conventional virtual reality devices that the captured image differs too greatly from the actual content.

Description

Virtual reality equipment and monocular screen capturing method
Technical Field
The application relates to the technical field of virtual reality equipment, in particular to virtual reality equipment and a monocular screen capturing method.
Background
Virtual reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving the user a sense of immersion. A virtual reality device is a device that uses virtual reality technology to present virtual pictures to a user. Generally, a virtual reality device includes two display screens for presenting the virtual picture content, corresponding to the user's left and right eyes respectively. When the two display screens show images of the same object from different viewing angles, the user perceives a stereoscopic viewing experience.
In actual use, a virtual reality device can output its displayed content as pictures through a screen capture operation, for sharing over a network or for display on other display devices. For example, the virtual reality device can establish a communication connection with a smartphone and send the captured picture file to it, so that the content displayed on the virtual reality device is stored and displayed on the smartphone.
However, to adapt to its optical components, a VR device applies distortion processing to the edge area of the displayed image. Directly capturing the displayed content of the VR device therefore captures the distorted image; that is, the captured image is deformed and differs too much from the actual content. In addition, when a VR device displays a three-dimensional image, the two display screens show different contents, so the image content obtained by direct screen capture also differs greatly from the virtual scene content actually displayed.
Disclosure of Invention
The application provides a virtual reality device and a monocular screen capture method, aiming to solve the problem that images obtained by conventional screen capture methods differ greatly from the virtual scene content actually displayed.
In one aspect, the present application provides a virtual reality device comprising a display and a controller, wherein the display comprises a left display and a right display for presenting a user interface suitable for viewing by a left eye and a user interface suitable for viewing by a right eye, respectively. The controller is configured to perform the following program steps:
receiving a control instruction for screen capture input by a user;
responding to the control instruction, and acquiring image information to be rendered of a display in a preset direction; the preset direction display is one of a left display or a right display;
and saving the image information as a screen capture picture file.
In another aspect, the application also provides a monocular screen capture method, which can be applied to the above virtual reality device and specifically includes the following steps:
receiving a control instruction for screen capture input by a user;
responding to the control instruction, and acquiring image information to be rendered of a display in a preset direction; the preset direction display is one of a left display or a right display;
and saving the image information as a screen capture picture file.
According to the above technical solutions, after the user inputs a screen capture control instruction, the virtual reality device and the monocular screen capture method can extract the unrendered image information for the content displayed on one side's display and save it as a screen capture picture file, completing the monocular screen capture operation. Because the extracted image information has not undergone distortion processing, the captured picture file differs only slightly from the virtual scene content actually displayed, alleviating the problem in conventional virtual reality devices that the captured image differs too much from the actual content.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the present application;
FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the present application;
FIG. 3 is a schematic diagram of the recommended content area of the global interface in an embodiment of the present application;
FIG. 4 is a schematic diagram of the application shortcut operation entry area of the global interface in an embodiment of the present application;
FIG. 5 is a schematic diagram of a floating item of the global interface in an embodiment of the present application;
FIG. 6 is a schematic diagram of a VR scene picture in an embodiment of the present application;
FIG. 7 is a schematic diagram of the monocular screen capture process in an embodiment of the present application;
FIG. 8 is a schematic flowchart of extracting texture image information in an embodiment of the present application;
FIG. 9 is a schematic flowchart of saving a picture file in an embodiment of the present application;
FIG. 10 is a schematic flowchart of performing the screen capture operation according to user interface type in an embodiment of the present application;
FIG. 11 is a schematic flowchart of detecting the user interface type in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments shown in the present application without inventive effort shall fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently of the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, in the claims, and in the drawings of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. Data so termed are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments of the present application, the virtual reality device 500 generally refers to a display device that can be worn on the face of a user to provide an immersive experience, including but not limited to VR glasses, augmented reality (AR) devices, VR game devices, mobile computing devices, and other wearable computers. The virtual reality device 500 may operate independently or may be connected to another intelligent display device as an external device, where the display device may be a smart television, a computer, a tablet computer, a server, or the like.
The virtual reality device 500 is worn on the face of the user and displays media pictures close to the user's eyes, providing an immersive experience. To present the media pictures and to be worn, the virtual reality device 500 may include a number of components for display and for facial wearing. Taking VR glasses as an example, the virtual reality device 500 may include a housing, temples, an optical system, a display assembly, a posture detection circuit, an interface circuit, and the like. In practical applications, the optical system, display assembly, posture detection circuit, and interface circuit can be arranged in the housing to present a specific display picture, while the temples connect to the two sides of the housing so the device can be worn on the user's face.
The posture detection circuit incorporates posture detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's posture and transmits the detected posture data to a processing element such as a controller, and the processing element adjusts the specific picture content in the display assembly according to the detected posture data.
It should be noted that the manner in which specific picture content is presented varies with the type of virtual reality device 500. For example, as shown in FIG. 1, for some thin and light VR glasses, the built-in controller generally does not directly participate in controlling the displayed content; instead, it sends the posture data to an external device, such as a computer, which processes the data, determines the specific picture content to display, and returns that content to the VR glasses, where the final picture is shown.
In some embodiments, the virtual reality device 500 may access the display device 200, and a network-based display system is constructed among the virtual reality device 500, the display device 200, and the server 400, so that data can be exchanged among them in real time. For example, the display device 200 may obtain media data from the server 400, play it, and transmit specific picture content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device, among others. The particular display device type, size, resolution, and so on are not limiting, and those skilled in the art will appreciate that the display device 200 may be changed in performance and configuration as needed. The display device 200 may provide a broadcast-receiving television function and may additionally provide the smart network television functions of a computer, including but not limited to network television, smart television, Internet Protocol television (IPTV), and the like.
The display device 200 and the virtual reality device 500 also perform data communication with the server 400 through a plurality of communication methods. The display device 200 and the virtual reality device 500 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or another network. The server 400 may provide various contents and interactions to the display device 200. Illustratively, the display device 200 receives software program updates, accesses a remotely stored digital media library, and performs electronic program guide (EPG) interactions by sending and receiving information. The server 400 may be one cluster or a plurality of clusters and may include one or more types of servers. The server 400 provides other web service contents such as video on demand and advertisement services.
In the course of data interaction, the user may operate the display apparatus 200 through the mobile terminal 100A and the remote controller 100B. The mobile terminal 100A and the remote controller 100B may communicate with the display device 200 in a direct wireless connection manner or in an indirect connection manner. That is, in some embodiments, the mobile terminal 100A and the remote controller 100B may communicate with the display device 200 through a direct connection manner such as bluetooth, infrared, or the like. When transmitting the control instruction, the mobile terminal 100A and the remote controller 100B may directly transmit the control instruction data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 100A and the remote controller 100B may also access the same wireless network with the display apparatus 200 through a wireless router to establish indirect connection communication with the display apparatus 200 through the wireless network. When sending the control command, the mobile terminal 100A and the remote controller 100B may send the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 100A and the remote controller 100B to directly interact with the virtual reality device 500, for example, the mobile terminal 100A and the remote controller 100B may be used as handles in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and drive circuits associated with the display screen. To present specific pictures with a stereoscopic effect, the display assembly may include two display screens, corresponding to the user's left and right eyes respectively. When a 3D effect is presented, the picture contents displayed in the left and right screens differ slightly; they may respectively show the images captured by the left and right cameras during shooting of the 3D film source. Because the user observes the picture content with the left and right eyes separately, a display picture with a strong stereoscopic effect is observed while the device is worn.
The optical system in the virtual reality device 500 is an optical module consisting of a plurality of lenses. Arranged between the user's eyes and the display screen, it increases the optical path through the refraction of light by the lenses and the polarization effect of the polarizers on them, so that the content presented by the display assembly can be seen clearly within the user's field of view. Meanwhile, to suit users with different eyesight, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between them, thereby changing the optical path and adjusting the clarity of the picture.
The interface circuit of the virtual reality device 500 may be configured to transmit interactive data, and in addition to the above-mentioned transmission of the gesture data and the display content data, in practical applications, the virtual reality device 500 may further connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output a displayed screen to the display device in real time for display. As another example, the virtual reality device 500 may also be connected to a handle via an interface circuit, and the handle may be operated by a user's hand, thereby performing related operations in the VR user interface.
The VR user interface may be presented in a plurality of different UI layouts according to user operations. For example, the user interface may include a global UI. As shown in FIG. 2, after the AR/VR terminal is started, the global UI may be displayed on the display screen of the AR/VR terminal or on the display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut operation entry area 3, and a floating item area 4.
The recommended content area 1 is used to configure TAB columns of different classifications; media assets, special topics, and the like can be selected and configured in the columns. The media assets can include services with media content such as 2D movies, education courses, travel, 3D, 360-degree panorama, live broadcast, 4K movies, program applications, and games. A column can select different template styles and can support simultaneous recommendation and arrangement of media assets and titles, as shown in FIG. 3.
The business class extension area 2 supports configuring extension classifications of different classes. If a new business type appears, an independent TAB can be configured to display the corresponding page content. The extension classifications in the business class extension area 2 can also be re-ordered, and their services can be taken offline. In some embodiments, the business class extension area 2 may include content such as movies & TV, education, travel, applications, and "my". In some embodiments, the business class extension area 2 is configured to present the TABs of major business classes and supports configuring more classifications, as shown in FIG. 3.
The application shortcut operation entry area 3 can specify that pre-installed applications, of which several may be designated, are displayed at the front for operation recommendation, and supports configuring a special icon style to replace the default icon. In some embodiments, the application shortcut operation entry area 3 further includes a left movement control and a right movement control for moving the option target, used to select different icons, as shown in FIG. 4.
The floating item area 4 may be configured diagonally above the left or right of the fixed areas, may be configured as an alternative image, or may be configured with a jump link. For example, after receiving a confirmation operation, the floating item jumps to an application or displays a designated function page, as shown in FIG. 5. In some embodiments, the floating item may also be configured without a jump link and used purely for image presentation.
In some embodiments, the global UI further comprises a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. When an icon is selected with the handle of the AR/VR terminal, i.e., the handheld controller, the icon displays a text prompt with left-right expansion, and the selected icon is stretched and expanded leftward or rightward according to its position.
For example, after the search icon is selected, it displays the text "search" together with the original icon, and a further click on the icon or the text jumps to the search page. As further examples, clicking the favorites icon jumps to the favorites TAB, clicking the history icon displays the history page at the default location, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, interaction may be performed through a peripheral; for example, the handle of the AR/VR terminal may operate the user interface of the AR/VR terminal. The handle includes a return button; a home key, whose long press realizes a reset function; volume up and down buttons; and a touch area that realizes clicking, sliding, pressing-and-holding a focus, and dragging.
The user may enter different scene interfaces through the global interface, for example, as shown in FIG. 6, the user may enter the browse interface at a "browse interface" entry in the global interface, or initiate the browse interface by selecting any of the assets in the global interface. In the browsing interface, the virtual reality device 500 may create a 3D scene through the Unity 3D engine and render specific screen content in the 3D scene.
In the browsing interface, the user can watch specific media asset content. To provide a better viewing experience, different virtual scene controls can further be arranged in the browsing interface to cooperate with the media content in presenting specific scenes or realizing real-time interaction. For example, a panel may be set in the Unity 3D scene to present the picture content, matched with other virtual scene controls to achieve the effect of a cinema screen.
The virtual reality device 500 may present operation UI content in the browsing interface. For example, a list UI may be displayed in front of the display panel in the Unity 3D scene, showing icons of media assets stored locally on the virtual reality device 500 or icons of network media assets playable on it. The user can select any icon in the list UI, and the selected media asset is displayed in real time on the display panel.
In some embodiments, the user may perform a screen capture operation on the picture content displayed in the virtual reality device 500. The screen capture operation generates a screen capture picture file from the display content, so as to save the specific display picture at a certain moment. To capture the screen, the user inputs a screen capture control instruction; for example, the user may press the power key and the volume "+" key simultaneously to input the control instruction and make the virtual reality device 500 perform the screen capture action. After receiving the control instruction, the virtual reality device 500 may capture the display picture and store it as a picture file.
The way a screen capture generates a picture differs according to the picture display method. For a displayed two-dimensional image, the content currently shown on the screen can be captured directly as the target screenshot file after the user's control instruction is received. The virtual reality device 500, however, may display not only two-dimensional images but also three-dimensional images, and because the two are displayed differently, a target screenshot obtained by direct capture differs too much from the actual image.
Specifically: to present the stereoscopic effect, the left and right displays of the virtual reality device 500 show slightly different content, corresponding respectively to the content shot by the film source's left and right cameras. As a result, directly captured screen content may include both the left display's content and the right display's content; since the two differ, the capture cannot be output directly as a screenshot picture file.
Moreover, the virtual reality device 500 has a built-in optical assembly, an optical-path adjusting system composed of multiple lenses. Most of the lenses are circular, while the left and right displays of the virtual reality device 500 are rectangular, so the edge portion of the display picture appears deformed to the user, affecting the viewing effect. To provide a better viewing experience, when displaying picture content the virtual reality device 500 may first apply distortion processing: the picture content at edge positions is distorted in advance according to the deformation rule of the optical assembly's lenses, so as to counteract the deformation those lenses introduce.
Therefore, if the screenshot picture file is generated by direct screen capture, the captured content is what is shown on the screen, which is the distortion-processed image and deviates greatly from the original picture content.
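The patent does not specify the distortion model, but pre-distortion of this kind is commonly expressed as a radial remapping of each pixel's distance $r$ from the optical center; a standard two-coefficient form, given here only as an illustrative assumption, is

$$r_{\text{pre}} = r\left(1 + k_1 r^2 + k_2 r^4\right),$$

where $k_1$ and $k_2$ are coefficients fitted to the lens so that the pre-distortion applied during rendering cancels the opposite deformation introduced by the optical assembly.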
To capture the content displayed by the virtual reality device 500 and obtain a screenshot picture file that is identical or close to the original content, as shown in FIG. 7, some embodiments of the present application provide a monocular screen capture method applicable to the virtual reality device 500, specifically including the following steps:
S1: receiving a control instruction for screen capture input by a user.
During use of the virtual reality device 500, a user may input various control instructions, some of which can initiate the screen capture function. The virtual reality device 500 can provide the user with multiple means of inputting control instructions. In some embodiments, a power key, volume keys, a menu key, and the like may be provided on the virtual reality device 500, and the user inputs the screen capture control instruction through a single key or a key combination; for example, the user may press the power key and the volume "+" key simultaneously to input the control instruction for screen capture.
In some embodiments, for a virtual reality device 500 with fewer keys, the control instruction can also be input through an external device of the virtual reality device 500. For example, the user may interact through a paired handle device during use of the virtual reality device 500, and input the control instruction for screen capture through a function key or key combination on the handle device.
In some embodiments, the user may also input the screen capture control instruction through other interaction modes. For example, an intelligent voice system may be built into the operating system of the virtual reality device 500, and the user may speak the command "screenshot" through a built-in or external microphone to input the control instruction for screen capture.
It should be noted that the above input modes are only some examples of how the user may input the screen capture control instruction in the present application; different input modes may be adopted according to the interaction modes supported by the virtual reality device 500, and other input modes that those skilled in the art can derive from the input modes of the present application fall within the scope of the present application.
S2: in response to the control instruction, acquiring the to-be-rendered texture image information of the display in the preset direction.
After acquiring the control instruction input by the user, the virtual reality device 500 may respond to it by starting the screen capture operation. Unlike the traditional way of directly capturing the display content, the screen capture operation performed by the virtual reality device 500 in the present application may extract the picture content of only one of the two displays; that is, the preset-direction display is either the left display or the right display. For example, the device may be specified to capture the content displayed by the left display, obtaining a single screenshot picture file.
To eliminate the influence of distortion on the screenshot content, the virtual reality device 500 may directly extract the to-be-rendered texture image information after acquiring the control instruction. The texture image information contains picture content that has not yet been rendered (i.e., has not undergone distortion processing); it may be the image content generated by a virtual camera directly shooting the generated Unity 3D scene.
A virtual camera is a camera set in the Unity 3D scene that shoots the virtual objects in the scene to generate texture images. The virtual camera mimics the user's eyes, capturing imagery from the Unity 3D scene for display in the left and right displays. Accordingly, the virtual reality device 500 may set up two virtual cameras in the Unity 3D scene to mimic the user's left and right eyes respectively. The two virtual cameras satisfy a specific positional relationship in the scene, shooting it from different angles and outputting different texture image information, so that the left and right displays produce the 3D immersion effect, as roughly illustrated below.
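As a rough illustration of that positional relationship, the sketch below computes left- and right-eye camera positions from a head position and yaw; the 64 mm interpupillary distance and all names are assumptions for illustration, not values from the patent (in a Unity project this would be done by configuring the cameras' Transforms).

```java
// Illustrative only: stereo camera placement for a head at (x, z) with yaw in radians.
public class StereoCameraRig {
    private static final double HALF_IPD = 0.032; // assumed 64 mm interpupillary distance

    /** Returns {leftEye{x,z}, rightEye{x,z}} offset along the head's right vector. */
    public static double[][] eyePositions(double x, double z, double yaw) {
        double rightX = Math.cos(yaw);   // head's right vector in the x-z plane
        double rightZ = -Math.sin(yaw);
        return new double[][] {
            { x - HALF_IPD * rightX, z - HALF_IPD * rightZ }, // left-eye camera
            { x + HALF_IPD * rightX, z + HALF_IPD * rightZ }, // right-eye camera
        };
    }
}
```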
During actual viewing, the texture image information output by the two virtual cameras is rendered and then sent to the left and right displays. Since the distortion processing occurs during rendering, in this embodiment the texture image information may be extracted directly after the screenshot control instruction is acquired, in order to obtain an undeformed picture file for output.
S3: saving the texture image information as a screen capture picture file.
After acquiring the texture image information, the virtual reality device 500 may save it as a file in a picture format in a folder dedicated to screen captures. For example, the virtual reality device 500 may save the texture image information as a picture file in PNG format in a folder under the "DCIM/Camera" path.
According to the above technical solution, the monocular screen capture method provided in this embodiment can extract the texture image information corresponding to the single-side display, obtaining display content identical to the current virtual scene yet free of distortion processing, and store it as a picture file to complete the screen capture operation. The method thus adapts to the dual-display mode of the virtual reality device 500, eliminates the influence of distortion processing on the captured picture, and reduces the difference between the screenshot content and the actual virtual scene picture.
In some embodiments, to determine whether the user has input a control instruction for screen capture, in the step of receiving the control instruction the virtual reality device 500 may further run a detection program that detects the user's input interactions in real time, i.e., detects the key combination actions the user inputs. If a key combination action is the same as the preset screen capture key combination action, the control instruction for screen capture is generated automatically.
The memory of the virtual reality device 500 may store the detection program for detecting the key combination actions input by the user; the program is triggered to run whenever the user presses any key. Each key action input by the user is detected to determine whether it matches the preset screen capture key combination action. For example, if the preset combination is pressing the power key and the volume "+" key simultaneously, then when the user presses both keys at once, the detected combination matches the preset one, which is equivalent to the user inputting a control instruction for screen capture, so the control instruction is generated.
Similarly, if the key combination action differs from the preset screen capture key combination action, no screen capture control instruction is generated, and other functions are executed according to the operating system's interaction rules. For example, when the user presses only the power key, the detected key action differs from the preset combination; therefore no screen capture instruction is generated, and instead another function, such as turning off or lighting up the display of the virtual reality device 500, is performed according to the interaction mode set in the operating system.
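A minimal sketch of such a detection program is shown below, assuming an Android-style key event stream; the key codes match Android's KeyEvent constants, but the class and callback names are hypothetical, not from the patent.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch: report a screen capture instruction when the preset
// combination (power + volume "+") is held down together.
public class ScreenshotKeyDetector {
    private static final int KEYCODE_VOLUME_UP = 24; // android.view.KeyEvent.KEYCODE_VOLUME_UP
    private static final int KEYCODE_POWER = 26;     // android.view.KeyEvent.KEYCODE_POWER

    private final Set<Integer> pressedKeys = new HashSet<>();

    /** Called on every key-down; returns true when the screen capture
     *  control instruction should be generated. */
    public boolean onKeyDown(int keyCode) {
        pressedKeys.add(keyCode);
        return pressedKeys.contains(KEYCODE_POWER) && pressedKeys.contains(KEYCODE_VOLUME_UP);
    }

    /** Called on every key-up, so single-key presses fall through to their normal functions. */
    public void onKeyUp(int keyCode) {
        pressedKeys.remove(keyCode);
    }
}
```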
In some embodiments, to obtain the texture image information, the virtual reality device 500 may further run a texture image acquisition program that monitors the user's control instructions in real time and captures the image information once a control instruction is detected. That is, as shown in FIG. 8, the step of obtaining the to-be-rendered texture image information of the preset-direction display further includes:
S211: acquiring a display data stream;
S212: recording the input time of the control instruction;
S213: extracting texture image information from the display data stream.
While the virtual reality device 500 is in use, it may render the virtual scene, i.e., the Unity 3D scene, in real time using the Unity 3D engine. The Unity 3D scene further contains the two virtual cameras, which imitate the user's eyes and shoot within the scene, generating image information from the images of the virtual objects. A virtual camera may be connected to a posture sensor in the virtual reality device 500 so that the sensor adjusts the shooting angle, and as the camera continuously shoots the Unity 3D scene it outputs a display data stream composed of multiple frames of images. That is, the display data stream is obtained by the virtual camera corresponding to the preset-direction display shooting the Unity 3D scene.
The virtual reality device 500 may capture the display data stream shot and output by the virtual camera for transmission to the display. When the user inputs the screen capture control instruction, the virtual reality device 500 further records the input time of the control instruction and extracts the texture image information from the display data stream according to that time, so that the texture image information is the frame image in the display data stream corresponding to the input time.
For example, when the virtual reality device 500 acquires a screen capture control instruction input by the user at 11:20:12:051 on 22 October 2020, it records the input time and extracts from the display data stream the frame image to be displayed at that moment as the texture image information. Since the video frame rate of the display data stream shot by the virtual camera is fixed, some moments may have no corresponding frame image. For example, for a display data stream with a frame rate of 24, if the first frame image corresponds to time 11:20:12:000, the second corresponds to 11:20:12:042; thus, when the input time of the control instruction is 11:20:12:020, there is no frame image at exactly that moment. In that case, the frame image with the smallest time difference from the input time may be taken as the corresponding frame image; i.e., when the input time is 11:20:12:020, the first frame image at 11:20:12:000 may be extracted as the texture image information.
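The nearest-frame rule reduces to simple arithmetic; the sketch below assumes a fixed-frame-rate stream and uses the 24 fps figures from the example above (class and method names are illustrative).

```java
// Pick the frame whose timestamp is closest to the instruction's input time.
public class FrameSelector {
    public static long nearestFrameIndex(long inputTimeMs, long streamStartMs, double fps) {
        double frameIntervalMs = 1000.0 / fps;        // ~41.7 ms per frame at 24 fps
        return Math.round((inputTimeMs - streamStartMs) / frameIntervalMs);
    }

    public static void main(String[] args) {
        // Input 20 ms after the first frame of a 24 fps stream: frame 0 (at 0 ms)
        // is nearer than frame 1 (at ~42 ms), matching the example above.
        System.out.println(nearestFrameIndex(20, 0, 24.0)); // prints 0
    }
}
```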
In a conventional screen capture mode, captured picture files are stored uniformly in one folder; for example, the Android system places screenshot files in the folder whose storage path is "DCIM/Camera". Therefore, to keep the screenshot saving location consistent, as shown in FIG. 9, in some embodiments the step of obtaining the to-be-rendered texture image information of the preset-direction display further includes:
S221: storing the texture image information as a Byte array;
S222: traversing the file saving paths of the current system;
S223: if the current system includes a file saving path at the preset position, saving the Byte array as a picture file;
S224: if the current system does not include a file saving path at the preset position, creating a new folder at the preset position.
After extracting the texture image information, the virtual reality device 500 may store it in the form of a Byte array, so that during the saving process the device can perform the other control actions related to the screen capture operation and the captured picture file can be stored in the predetermined manner.
After storing the texture image information as a Byte array, the virtual reality device 500 may traverse the file saving paths of the current system to determine whether a folder for saving screenshot picture files exists. For example, the virtual reality device 500 may read the folder names in the current system registry entries one by one to determine whether a folder with the file path "DCIM/Camera" is included in the current system. If the folder exists, the Byte array can be saved into it as a picture file, completing the saving of the screenshot. If it does not exist, a folder with the path "DCIM/Camera" can be created for saving the screenshot picture file.
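A minimal sketch of steps S221 to S224 under stated assumptions: the Byte array already holds encoded PNG data, the save root stands in for the device's storage root, and the "DCIM/Camera" path follows the example above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Save the screenshot bytes under <storageRoot>/DCIM/Camera, creating the
// folder first if the preset path does not yet exist (S224).
public class ScreenshotSaver {
    public static Path save(byte[] pngBytes, Path storageRoot, String fileName) throws IOException {
        Path cameraDir = storageRoot.resolve("DCIM").resolve("Camera");
        if (!Files.isDirectory(cameraDir)) {
            Files.createDirectories(cameraDir); // preset path absent: create it
        }
        Path target = cameraDir.resolve(fileName);
        return Files.write(target, pngBytes);   // S223: save the Byte array as a picture file
    }
}
```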
It should be noted that the path where the screenshot picture file is saved differs between operating systems, so different virtual reality devices 500 may preset file saving paths at different positions. The user can also customize the file saving path according to the purpose of the screenshot picture file, for later viewing. For example, for a screenshot picture output to the display device 200 for display, the file saving path may be set directly to the address of the display device 200, so that after the screen capture operation completes, the captured picture file is sent straight to the display device 200 according to the set address.
To enable the user to view the screenshot picture file, in some embodiments the step of saving the texture image information as a picture file further includes: detecting the saving progress of the screenshot picture file; and, if the screenshot picture file has finished saving, generating a database update instruction and running it, so that the screenshot picture file can be displayed in the picture browsing interface.
Because the amount of texture image information is large, the virtual reality device 500 takes a certain amount of time to write the data into the storage space when saving the texture image information as a picture file. Therefore, while the virtual reality device 500 is saving the texture image information as a screenshot picture file, it can monitor the saving progress in real time and, once the screenshot picture file has been saved, notify the database to update the picture information so that the user sees the latest screenshot in the picture browsing interface.
The picture browsing interface is also called a picture browser, and is an interface specially used for a user to view pictures. The picture browsing interface can comprise thumbnail icons of a plurality of pictures, and when a user selects the thumbnail, the picture corresponding to the icon can be opened and displayed in an enlarged mode in the picture display area.
To notify the database to update the picture information, an update instruction can be generated and applied to the database management program after it is detected that the screenshot picture file has been fully saved. Upon receiving the update instruction, the database management program scans the currently stored picture information and compares the scan with the previous result to determine whether newly added picture file information exists. When there is newly added file information, the database management program displays the information corresponding to the newly added file in the picture browser in the chronological order of addition, so the user can select and view it.
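On Android, one plausible realization of the database update instruction (the patent does not name an API, so treat this as an assumption) is MediaScannerConnection, which asks the media database to index a newly written file so picture browsers can list it.

```java
import android.content.Context;
import android.media.MediaScannerConnection;

// Notify the media database once the screenshot file is fully saved.
public class ScreenshotDatabaseNotifier {
    public static void notifySaved(Context context, String savedFilePath) {
        MediaScannerConnection.scanFile(
                context,
                new String[] { savedFilePath },
                new String[] { "image/png" },
                (path, uri) -> {
                    // Scan complete: the picture browsing interface can now show the new screenshot.
                });
    }
}
```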
As can be seen from the above technical solutions, the above embodiments save the texture image information as a picture file by controlling how the texture image information is stored. The saving path can be the same as the location used by the conventional screenshot mode, and the database is notified to update the picture information after saving completes, making it convenient for the user to view the picture file obtained by the screen capture.
While the screenshot picture file is being saved, because the amount of texture image information is large, the whole saving process both occupies system computing resources and takes a long time. If it were completed on a main thread at the operating system level, the application would stall and the user experience would suffer. To improve the user experience, in some embodiments of the present application, as shown in FIG. 10, after receiving the screen capture control instruction input by the user, the virtual reality device 500 may further execute the following method:
S231: detecting the current user interface type;
S232: if the current user interface type is the first type of interface, capturing the display content through a main thread at the operating system layer;
S233: if the current user interface type is the second type of interface, executing the step of responding to the control instruction through a coroutine at the Unity layer.
The virtual reality device 500 may present different types of interfaces in use, such as two-dimensional scene interfaces, e.g., configuration and control interfaces, and three-dimensional scene interfaces, e.g., browsing and playing interfaces. For some interfaces, the left and right displays show the same content during display, and the content is shown directly without distortion processing. In this embodiment, an interface whose left and right display contents are identical and undistorted is called a first-type interface; for it, a screen capture image can be obtained by direct capture. All other user interfaces are called second-type interfaces; for them, the texture image information of the one-side display can be obtained in the manner described in the above embodiments and saved as a screenshot picture file. For example, the first-type interface is a 2D interface, and the second-type interface is a 3D interface or a 360-degree panoramic interface.
Clearly, the data processing load of obtaining a screenshot by direct capture is generally smaller than that of saving texture image information as a picture file. Therefore, in this embodiment, after the screen capture control instruction input by the user is acquired, the current user interface type can be detected, so that different screen capture modes are selected for different user interface types.
For example, when the current user interface type is detected to be the first-type interface, the main thread at the operating system layer can directly capture the display content to obtain the screenshot picture file; when the current user interface type is detected to be the second-type interface, a coroutine at the Unity layer executes the screenshot method of the above embodiments, i.e., responds to the control instruction, acquires the to-be-rendered texture image information of the preset-direction display, and saves the texture image information as the screenshot picture file.
As can be seen, by detecting the current user interface type, this embodiment selects either the operating-system-layer main thread or a Unity-layer coroutine to execute the screen capture operation according to the user interface type. When the data processing load is large, the coroutine shares the main thread's load, reducing application stalls; when direct capture with a smaller data processing load suffices, the main thread still completes the screen capture operation directly, so it finishes quickly.
It should be noted that when the current user interface type is detected to be the second type, a screenshot event may be sent from the operating system layer to the Unity layer to notify the Unity layer to perform the screenshot operation. For example, for a virtual reality device 500 running the Android system, after the operating system receives the event, if the current user interface type is detected to be the second-type interface, the Android layer no longer executes the screenshot operation itself but notifies the Unity layer; after receiving the screenshot event, the Unity layer performs the screenshot by extracting the texture image information of the designated-side display and saving it as a screenshot picture file, thereby reducing application stalls.
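A hedged sketch of this dispatch is given below, assuming an Android operating-system layer and Unity's Android bridge; the "ScreenshotManager" GameObject and method name are hypothetical, not from the patent.

```java
import com.unity3d.player.UnityPlayer;

// Route the screen capture according to the detected user interface type.
public class ScreenshotDispatcher {
    public enum UiType { FIRST_TYPE, SECOND_TYPE }

    public static void onScreenshotInstruction(UiType type) {
        if (type == UiType.FIRST_TYPE) {
            captureDisplayDirectly(); // main thread: both displays show identical, undistorted content
        } else {
            // Hand the screenshot event to the Unity layer; a coroutine there
            // extracts the preset-direction display's to-be-rendered texture.
            UnityPlayer.UnitySendMessage("ScreenshotManager", "OnScreenshotEvent", "");
        }
    }

    private static void captureDisplayDirectly() {
        // Placeholder for the operating-system-level direct capture.
    }
}
```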
In the above embodiments, to detect the current user interface type, the virtual reality device 500 may read it from the running information of the current interface, or from the film source type of the currently playing media asset file. For example, by reading the running information in the task manager and determining that the process currently running in the system is the settings interface, the current user interface type is determined to be the first-type interface. As another example, when the film source type of the currently playing media asset file is read and found to be a 2D film source, the current user interface type is determined to be the first-type interface.
However, in some playback modes, the current user interface type cannot be detected accurately from the running information and the film source type alone. For example, when a two-dimensional picture file is played in a simulated-cinema application, although the media asset file type being played is a 2D film source, the simulated cinema adds 3D virtual picture content during playback, so the final picture it presents is 3D image content whose left and right display contents differ and are distorted. Therefore, to detect the current user interface type accurately, as shown in FIG. 11, in some embodiments the step of detecting the current user interface type further includes:
S2311: comparing the user interface images displayed by the left display and the right display;
S2312: if the user interface images displayed by the left display and the right display are the same, marking the current user interface type as a first-type interface;
S2313: if the user interface images displayed by the left display and the right display are different, marking the current user interface type as a second-type interface.
After acquiring the screen capture control instruction input by the user, the virtual reality device 500 may extract the user interface contents displayed by the left display and the right display at the same moment and compare the user interface images on the two sides. The comparison can be performed by comparing, one by one, the color values of all or some of the pixels in the user interface images.
For example, each pixel of the user interface image displayed by the left display and of the user interface image displayed by the right display is traversed, and the pixel values are compared one by one. When the color values at all corresponding positions are the same, the image contents displayed by the left and right displays are the same, i.e., the current user interface is determined to be a first-type interface; likewise, when the comparison shows that the color values at some positions differ, the image contents displayed by the left and right displays are different, i.e., the current user interface type is determined to be a second-type interface.
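A minimal sketch of this comparison, assuming both user interface images are available as equal-format ARGB pixel arrays (names are illustrative):

```java
// Determine the interface type by comparing the two displays' images pixel by pixel.
public class InterfaceTypeDetector {
    /** Returns true for a first-type (identical, 2D) interface. */
    public static boolean isFirstTypeInterface(int[] leftPixels, int[] rightPixels) {
        if (leftPixels.length != rightPixels.length) {
            return false; // differing sizes cannot be identical content
        }
        for (int i = 0; i < leftPixels.length; i++) {
            if (leftPixels[i] != rightPixels[i]) {
                return false; // any differing color value: second-type (3D/panoramic) interface
            }
        }
        return true; // all corresponding color values equal: first-type interface
    }
}
```

Comparing only a sampled subset of pixel positions, which the text also allows, would make the check cheaper at a small risk of misclassification.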
Therefore, in this embodiment, by checking whether the display contents of the left display and the right display are the same, the currently displayed user interface type can be detected accurately, mitigating the misjudgment that can occur when the type is inferred from running information or film source type alone, and making the screen capture operation more reliable.
Based on the above monocular screen capturing method, some embodiments of the present application further provide a virtual reality device 500. As shown in fig. 7, the virtual reality device 500 includes a display and a controller, where the display includes a left display and a right display and is configured to display a user interface, and the controller is configured to perform the following program steps:
receiving a control instruction for screen capture input by a user;
in response to the control instruction, acquiring texture image information to be rendered of the display in a preset direction, where the preset-direction display is one of the left display and the right display; and
saving the texture image information as a screen capture picture file.
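The following Unity C# sketch illustrates these program steps under stated assumptions: the preset-direction (for example, left-eye) virtual camera renders into a RenderTexture, the pre-distortion frame is read back, encoded to PNG, and written under a save path. The camera reference, directory name, and file naming are assumptions for illustration, not the device's actual implementation.

```csharp
using System;
using System.Collections;
using System.IO;
using UnityEngine;

public class MonocularCapture : MonoBehaviour
{
    // Hypothetical reference to the virtual camera of the preset
    // direction (left or right eye) shooting the Unity 3D scene.
    // Assumed to render into a RenderTexture rather than the screen.
    public Camera presetEyeCamera;

    public IEnumerator CaptureAndSave()
    {
        // Read back only after the frame has been fully rendered.
        yield return new WaitForEndOfFrame();

        // Grab the to-be-rendered (pre-distortion) frame of the
        // preset-direction display from the camera's render target.
        RenderTexture rt = presetEyeCamera.targetTexture;
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rt;

        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;

        // Store the texture image information as a Byte array, then
        // save it as a screen capture picture file; create the folder
        // if the preset save path does not exist yet.
        byte[] png = tex.EncodeToPNG();
        string dir = Path.Combine(Application.persistentDataPath, "Screenshots");
        if (!Directory.Exists(dir))
            Directory.CreateDirectory(dir);
        string file = Path.Combine(
            dir, "capture_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".png");
        File.WriteAllBytes(file, png);
        Destroy(tex);
    }
}
```

A caller would typically invoke StartCoroutine(CaptureAndSave()) after the screen capture control instruction is received; on android, a subsequent media database update would then make the file visible in a picture browsing interface.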
As can be seen from the above technical solutions, after the user inputs a screen capture control instruction, the virtual reality device 500 provided in the above embodiments can extract the unrendered texture image information of the display on the designated side and save it as a screen capture picture file, thereby completing the monocular screen capture operation. Because the extracted texture image information has not undergone distortion processing, the difference between the captured image content and the actually displayed virtual scene content is small, which alleviates the large image discrepancies produced by traditional screen capture methods.
The embodiments provided in the present application are only a few examples of the general concept of the present application and do not limit its scope. Any other embodiment extended from the solutions of the present application without inventive effort by a person skilled in the art also falls within the protection scope of the present application.

Claims (10)

1. A virtual reality device, comprising:
a display, including a left display and a right display, configured to display a user interface;
a controller configured to:
receive a control instruction for screen capture input by a user;
in response to the control instruction, acquire image information to be rendered of a display in a preset direction, wherein the preset-direction display is one of the left display and the right display; and
save the image information as a screen capture picture file.
2. The virtual reality device of claim 1, wherein in the step of receiving a user-input control instruction for screen capture, the controller is further configured to:
detect a key combination action input by the user; and
if the key combination action is the same as a preset screen capture key combination action, generate the control instruction for screen capture.
3. The virtual reality device of claim 1, wherein in the step of obtaining image information to be rendered for the display in a preset direction, the controller is further configured to:
store the image information as a Byte array;
traverse the file saving paths of the current system;
if the current system includes a file saving path at a preset position, save the Byte array as a picture file; and
if the current system does not include a file saving path at the preset position, create a new folder at the preset position.
4. The virtual reality device of claim 1, wherein in the step of saving the image information as a picture file, the controller is further configured to:
detect the saving progress of the screen capture picture file;
if the screen capture picture file has been saved, generate a database update instruction; and
run the database update instruction so that the screen capture picture file is displayed in a picture browsing interface.
5. The virtual reality device of claim 1, wherein after the step of receiving a user-entered control instruction for screen capture, the controller is further configured to:
detect the current user interface type;
if the current user interface type is a first type interface, capture the display content through a main thread of an operating system layer; and
if the current user interface type is a second type interface, execute the step of responding to the control instruction through a coroutine of the Unity layer.
6. The virtual reality device of claim 5, wherein the first type of interface is a 2D interface; the second type of interface is a 3D interface or a 360 panorama interface, and in the step of detecting the current user interface type, the controller is further configured to:
compare the user interface images displayed by the left display and the right display;
if the user interface images displayed by the left display and the right display are the same, mark the current user interface type as the first type interface; and
if the user interface images displayed by the left display and the right display are different, mark the current user interface type as the second type interface.
7. The virtual reality device of claim 5, wherein if the current user interface type is a second type of interface, the controller is further configured to:
send a screen capture event to the Unity layer through the operating system layer, so as to notify the Unity layer to execute the step of responding to the control instruction.
8. The virtual reality device of claim 1, wherein in the step of obtaining image information to be rendered for the display in a preset direction, the controller is further configured to:
acquire a display data stream, wherein the display data stream is obtained by shooting a Unity 3D scene with a virtual camera corresponding to the display in the preset direction;
record the input time of the control instruction; and
extract texture image information from the display data stream, wherein the texture image information is the frame image corresponding to the input time in the display data stream.
9. A monocular screen capturing method, applied to a virtual reality device, wherein the virtual reality device comprises a left display and a right display, and the monocular screen capturing method comprises:
receiving a control instruction for screen capture input by a user;
in response to the control instruction, acquiring image information to be rendered of a display in a preset direction, wherein the preset-direction display is one of the left display and the right display; and
saving the image information as a screen capture picture file.
10. The monocular screen capturing method of claim 9, wherein after the step of receiving a control instruction for screen capture input by a user, the method further comprises:
detecting a current user interface type;
if the current user interface type is a first type interface, capturing the display content through a main thread of an operating system layer; and
if the current user interface type is a second type interface, executing the step of responding to the control instruction through a coroutine of the Unity layer.
CN202110065017.5A 2021-01-18 2021-01-18 Virtual reality equipment and monocular screen capturing method Active CN112732088B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110065017.5A CN112732088B (en) 2021-01-18 2021-01-18 Virtual reality equipment and monocular screen capturing method
PCT/CN2021/137060 WO2022151883A1 (en) 2021-01-18 2021-12-10 Virtual reality device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110065017.5A CN112732088B (en) 2021-01-18 2021-01-18 Virtual reality equipment and monocular screen capturing method

Publications (2)

Publication Number Publication Date
CN112732088A true CN112732088A (en) 2021-04-30
CN112732088B CN112732088B (en) 2023-01-20

Family

ID=75592193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110065017.5A Active CN112732088B (en) 2021-01-18 2021-01-18 Virtual reality equipment and monocular screen capturing method

Country Status (1)

Country Link
CN (1) CN112732088B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359225A (en) * 2008-08-29 2009-02-04 北京大学 Cooperation control system for underwater multi-robot
WO2015192117A1 (en) * 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
CN106844017A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 The method and apparatus that event is processed for Website server
CN108885348A (en) * 2016-04-04 2018-11-23 三星电子株式会社 Device and method for generating the portable image equipment of application image
US20170301123A1 (en) * 2016-04-18 2017-10-19 Beijing Pico Technology Co., Ltd. Method and apparatus for realizing boot animation of virtual reality system
WO2018000609A1 (en) * 2016-06-30 2018-01-04 乐视控股(北京)有限公司 Method for sharing 3d image in virtual reality system, and electronic device
CN106293395A (en) * 2016-08-03 2017-01-04 深圳市金立通信设备有限公司 A kind of virtual reality glasses and interface alternation method thereof
CN109840946A (en) * 2017-09-19 2019-06-04 腾讯科技(深圳)有限公司 Virtual objects display methods and device
CN107861629A (en) * 2017-12-20 2018-03-30 杭州埃欧哲建设工程咨询有限公司 A kind of practice teaching method based on VR
CN108093060A (en) * 2017-12-26 2018-05-29 陈占辉 A kind of method for pushing of Intelligent housing background system message
CN109002248A (en) * 2018-08-31 2018-12-14 歌尔科技有限公司 VR scene screenshot method, equipment and storage medium
CN110505471A (en) * 2019-07-29 2019-11-26 青岛小鸟看看科技有限公司 One kind wearing display equipment and its screen capture method, apparatus
CN112188087A (en) * 2020-09-10 2021-01-05 北京为快科技有限公司 Panoramic video screenshot method and device, storage medium and computer equipment
CN112156464A (en) * 2020-10-22 2021-01-01 腾讯科技(深圳)有限公司 Two-dimensional image display method, device and equipment of virtual object and storage medium

Also Published As

Publication number Publication date
CN112732088B (en) 2023-01-20

Similar Documents

Publication Publication Date Title
CN110636353B (en) Display device
CN113064684B (en) Virtual reality equipment and VR scene screen capturing method
CN112732089A (en) Virtual reality equipment and quick interaction method
CN111970456B (en) Shooting control method, device, equipment and storage medium
CN112073798B (en) Data transmission method and equipment
WO2020248697A1 (en) Display device and video communication data processing method
CN114302221B (en) Virtual reality equipment and screen-throwing media asset playing method
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
CN112929750B (en) Camera adjusting method and display device
CN114363705A (en) Augmented reality equipment and interaction enhancement method
CN114286077B (en) Virtual reality device and VR scene image display method
WO2022193931A1 (en) Virtual reality device and media resource playback method
CN112732088B (en) Virtual reality equipment and monocular screen capturing method
WO2022151883A1 (en) Virtual reality device
WO2022151882A1 (en) Virtual reality device
CN115129280A (en) Virtual reality equipment and screen-casting media asset playing method
WO2020248682A1 (en) Display device and virtual scene generation method
CN112905007A (en) Virtual reality equipment and voice-assisted interaction method
WO2022111005A1 (en) Virtual reality (vr) device and vr scenario image recognition method
CN114283055A (en) Virtual reality equipment and picture display method
CN116126175A (en) Virtual reality equipment and video content display method
CN112667079A (en) Virtual reality equipment and reverse prompt picture display method
CN114327032A (en) Virtual reality equipment and VR (virtual reality) picture display method
CN116931713A (en) Virtual reality equipment and man-machine interaction method
CN116132656A (en) Virtual reality equipment and video comment display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant