CN114327700A - Virtual reality equipment and screenshot picture playing method - Google Patents


Info

Publication number
CN114327700A
Authority
CN
China
Prior art keywords
picture
virtual reality
screenshot
screen capture
reality device
Prior art date
Legal status
Pending
Application number
CN202110284754.4A
Other languages
Chinese (zh)
Inventor
郑美燕
王大勇
姜璐珩
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to PCT/CN2021/137060 priority Critical patent/WO2022151883A1/en
Publication of CN114327700A publication Critical patent/CN114327700A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations

Abstract

The present application provides a virtual reality device and a screenshot picture playing method. After a control instruction input by a user is acquired, the method parses a screenshot picture file to obtain a screenshot image and a field angle, and then invokes a player model according to the field angle so as to map the screenshot image onto the player model. The screenshot picture playing method can thus display the screenshot using a player model. Because the player model has the shape of the display area at the corresponding field angle, the screenshot can fill the whole field of view during playback, mitigating confusion between the sky box picture and the screenshot. Moreover, the screenshot scene is restored through a curved surface model adapted to the screenshot, yielding a better immersive experience.

Description

Virtual reality equipment and screenshot picture playing method
The present application claims priority to the Chinese patent application entitled "A Virtual Reality Device and a Fast Interaction Method," filed with the Chinese Patent Office on January 18, 2021, with application number 202110065015.6, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of virtual reality, and in particular to a virtual reality device and a screenshot picture playing method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving the user a sense of immersion in that environment. A virtual reality device is a device that uses virtual display technology to present virtual pictures to a user. Generally, a virtual reality device includes two display screens for presenting virtual picture content, corresponding to the user's left and right eyes respectively. When the contents displayed on the two screens are images of the same object taken from different visual angles, the user is given a stereoscopic viewing experience.
When a picture is displayed on the virtual reality device, playback is completed through a specific playing interface. The playing interface consists of a picture imaging area and a sky box: the imaging area displays multimedia content such as pictures, while the sky box presents the rendered scene background of the playing interface, providing an immersive experience for the user.
In actual use, the virtual reality device can output the displayed content as a picture through a screen capture operation, so that the user's view at the screen capture moment can be shown on a display device or on the virtual reality device itself. An image captured this way contains not only the interface picture content but also the rendered scene content. Consequently, when the virtual reality device shows the screenshot in the imaging area, the interface picture and the rendered scene picture are displayed simultaneously, and the rendered scene picture is not coordinated with the sky box content; that is, the played picture is confused, and a good immersive experience cannot be obtained.
Disclosure of Invention
The present application provides a virtual reality device and a screenshot picture playing method, aiming to solve the problem that the rendered picture displayed by a traditional screenshot picture playing method is inconsistent with the sky box picture.
In a first aspect, the present application provides a virtual reality device comprising a display and a controller. The display is used to present user interfaces such as a playing interface; the user interface includes a picture imaging area for presenting multimedia content and a sky box area, located around the imaging area, for presenting rendered background content. The controller is configured to perform the following program steps:
acquiring a control instruction input by a user for playing a screenshot picture;
in response to the control instruction, parsing the screenshot picture file to be played to obtain a screenshot image and a field angle;
invoking a player model according to the field angle, the player model being a curved surface model built in the rendered scene in advance according to the field angle;
mapping the screenshot image onto the player model.
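The playback steps above hinge on a curved-surface player model whose extent matches the recorded field angle. The following is a minimal illustrative sketch, not the patent's implementation: it tessellates a spherical cap that subtends the given field angle and attaches texture (UV) coordinates so that a rendering engine could map the screenshot image onto the curved surface. All function and parameter names are assumptions.

```python
import math

def build_player_model(fov_degrees, radius=1.0, segments=16):
    """Build a curved-surface (spherical-cap) mesh subtending `fov_degrees`,
    with UV coordinates for mapping the screenshot onto it.

    Returns (vertices, uvs): vertices are (x, y, z) points on a sphere of
    `radius` centred on the viewer, facing the -z direction.
    """
    half = math.radians(fov_degrees) / 2.0
    vertices, uvs = [], []
    for i in range(segments + 1):
        v = i / segments                 # vertical texture parameter, 0..1
        theta = (v - 0.5) * 2.0 * half   # elevation within the cap
        for j in range(segments + 1):
            u = j / segments             # horizontal texture parameter, 0..1
            phi = (u - 0.5) * 2.0 * half # azimuth within the cap
            x = radius * math.cos(theta) * math.sin(phi)
            y = radius * math.sin(theta)
            z = -radius * math.cos(theta) * math.cos(phi)
            vertices.append((x, y, z))
            uvs.append((u, v))           # (u, v) maps the image onto the cap
    return vertices, uvs
```

Because every vertex lies on the sphere and the UV grid spans the full image, the screenshot fills exactly the cap subtended by the recorded field angle, which is the property the method relies on.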
In a second aspect, the present application further provides a display device comprising a display and a controller. The display is used to present user interfaces such as a playing interface; the user interface includes a picture imaging area for presenting multimedia content and a sky box area, located around the imaging area, for presenting rendered background content. The controller is configured to perform the following program steps:
receiving a screen capture instruction input by a user;
in response to the screen capture instruction, shooting an image of the current rendered scene to obtain a screenshot image;
recording the shooting position and field angle in the rendered scene at the time of shooting;
saving the screenshot image and the field angle to generate a screenshot picture file.
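The capture steps above bundle the screenshot image together with the field angle (and shooting position) into a single screenshot picture file. A minimal sketch of such a container follows; the layout (image bytes, then a JSON metadata trailer, then a 4-byte length) and all names are illustrative assumptions, not the patent's file format.

```python
import json
import struct

def save_screenshot_file(image_bytes, fov_degrees, position, path):
    """Bundle a captured frame with the recorded field angle and position.

    Illustrative layout: image bytes, then a UTF-8 JSON metadata block,
    then a 4-byte little-endian length of that block.
    """
    meta = json.dumps({"fov": fov_degrees,
                       "position": list(position)}).encode("utf-8")
    with open(path, "wb") as f:
        f.write(image_bytes)
        f.write(meta)
        f.write(struct.pack("<I", len(meta)))

def load_screenshot_file(path):
    """Recover the image bytes and the recorded field angle and position."""
    with open(path, "rb") as f:
        data = f.read()
    meta_len = struct.unpack("<I", data[-4:])[0]      # trailer length
    meta = json.loads(data[-4 - meta_len:-4].decode("utf-8"))
    return data[:-4 - meta_len], meta["fov"], tuple(meta["position"])
```

Storing the field angle alongside the pixels is what later lets the player pick a curved-surface model that matches the capture.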
In a third aspect, the present application further provides a screenshot picture playing method applied to the above virtual reality device, the method including:
acquiring a control instruction input by a user for playing a screenshot picture;
in response to the control instruction, parsing the screenshot picture file to be played to obtain a screenshot image and a field angle;
invoking a player model according to the field angle, the player model being a curved surface model built in the rendered scene in advance according to the field angle;
mapping the screenshot image onto the player model.
In a fourth aspect, the present application further provides a screenshot picture generating method applied to the above virtual reality device, the method including:
receiving a screen capture instruction input by a user;
in response to the screen capture instruction, shooting an image of the current rendered scene to obtain a screenshot image;
recording the shooting position and field angle in the rendered scene at the time of shooting;
saving the screenshot image and the field angle to generate a screenshot picture file.
According to the above technical solutions, the virtual reality device and the screenshot picture playing method can parse a screenshot picture file after acquiring the control instruction input by the user, so as to obtain a screenshot image and a field angle, and then invoke the player model according to the field angle so as to map the screenshot image onto the player model. The method can thus display the screenshot using a player model. Because the player model has the shape of the display area at the corresponding field angle, the screenshot can fill the whole field of view during playback, mitigating confusion between the sky box picture and the screenshot. Moreover, the screenshot scene is restored through a curved surface model adapted to the screenshot, yielding a better immersive experience.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly described below. It will be apparent that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the present application;
FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the application;
FIG. 3 is a schematic diagram of a recommended content area of a global interface in an embodiment of the present application;
FIG. 4 is a schematic diagram of an application shortcut operation entry area of a global interface in an embodiment of the present application;
FIG. 5 is a schematic diagram of a suspension of a global interface in an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating entering a shortcut center through a status bar in an embodiment of the present application;
FIG. 7 is a schematic diagram of a shortcut center window in the embodiment of the present application;
FIG. 8 is a schematic diagram illustrating entering a shortcut center through a key in an embodiment of the present application;
FIG. 9 is a schematic view of a screen shot beginning in an embodiment of the present application;
FIG. 10 is a diagram illustrating a prompt text window when a screen capture is successful in the embodiment of the present application;
FIG. 11 is a diagram illustrating the display effect of a screenshot of a conventional image browsing interface;
FIG. 12 is a flowchart illustrating a method for playing a screenshot picture according to an embodiment of the present application;
FIG. 13 is a schematic diagram illustrating a playing effect of a screenshot picture in an embodiment of the present application;
FIG. 14 is a schematic flow chart illustrating matching of player models according to an embodiment of the present application;
FIG. 15 is a schematic flow chart illustrating the creation of a player model according to an embodiment of the present application;
FIG. 16 is a schematic view of spherical coordinates in an embodiment of the present application;
FIG. 17 is a diagram illustrating a texture mapping process according to an embodiment of the present application;
FIG. 18 is a schematic diagram illustrating a mapping relationship between normal vector coordinates and texture coordinates of a point on a spherical surface according to an embodiment of the present disclosure;
fig. 19 is a schematic flowchart illustrating a process of parsing a screenshot picture file to be played according to an embodiment of the present application;
FIG. 20 is a flowchart illustrating an example of outputting a screenshot;
fig. 21 is a schematic view of a rendered scene when a screenshot image is output according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments shown in the present application without inventive effort shall fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can be utilized independently and separately from the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the drawings of the present application are used to distinguish similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments of the present application, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide an immersive experience, including but not limited to VR glasses, Augmented Reality (AR) devices, VR game devices, mobile computing devices, and other wearable computers. The technical solutions of the embodiments are described taking VR glasses as an example, and it should be understood that they can also be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or may be connected to another intelligent display device as an external device, where the display device may be a smart television, a computer, a tablet computer, a server, and the like.
The virtual reality device 500 may be worn on the user's face to display media images and provide close-range images for the user's eyes, bringing an immersive experience. To present the display and be worn on the face, the virtual reality device 500 may include a number of components. Taking VR glasses as an example, the virtual reality device 500 may include, but is not limited to, at least one of a housing, a face fixture, an optical system, a display assembly, a gesture detection circuit, an interface circuit, and the like. In practical applications, the optical system, display assembly, gesture detection circuit, and interface circuit can be arranged inside the housing to present a specific display picture, while the face fixture is connected to both sides of the housing so that the device can be worn on the user's face.
The gesture detection circuit contains attitude detection elements such as a gravity acceleration sensor and a gyroscope. When the user's head moves or rotates, the user's posture can be detected, and the detected posture data is transmitted to a processing element such as a controller, which adjusts the specific picture content in the display assembly according to the detected posture data.
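As a rough illustration of how posture data could drive the displayed picture, the sketch below integrates gyroscope angular rates into the yaw and pitch view angles used for re-rendering. This is an assumption-laden simplification: a real headset would fuse the gyroscope with the gravity acceleration sensor, and all names here are hypothetical.

```python
import math

def update_view_angles(yaw, pitch, gyro_rates, dt, pitch_limit=math.pi / 2):
    """Integrate gyroscope angular rates (rad/s) over a frame interval `dt`
    to update the head-pose yaw/pitch used to adjust the display content.

    Pure rate integration only; a production system would add sensor fusion
    to correct gyroscope drift.
    """
    rate_yaw, rate_pitch = gyro_rates
    yaw = (yaw + rate_yaw * dt) % (2 * math.pi)  # wrap yaw into [0, 2*pi)
    pitch = max(-pitch_limit, min(pitch_limit, pitch + rate_pitch * dt))
    return yaw, pitch
```

Each frame, the controller would call this with the latest sensor readings and re-render the scene from the resulting orientation.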
In some embodiments, the virtual reality device 500 shown in fig. 1 may access the display device 200, and construct a network-based display system with the server 400, and data interaction may be performed among the virtual reality device 500, the display device 200, and the server 400 in real time, for example, the display device 200 may obtain media data from the server 400 and play the media data, and transmit specific picture content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The specific display device type, size, and resolution are not limited; those skilled in the art will appreciate that the performance and configuration of the display device 200 may be modified as desired. The display device 200 may provide a broadcast-receiving television function and may additionally provide a smart network television function with computer support, including but not limited to network television, smart television, Internet Protocol Television (IPTV), and the like.
The display device 200 and the virtual reality device 500 also perform data communication with the server 400 through multiple communication methods. The display device 200 and the virtual reality device 500 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. Illustratively, the display device 200 receives software program updates or accesses a remotely stored digital media library by sending and receiving information and through Electronic Program Guide (EPG) interactions. The server 400 may be one cluster or multiple clusters and may include one or more types of servers. Other web service content, such as video on demand and advertisement services, is provided through the server 400.
In the course of data interaction, the user may operate the display apparatus 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 in a direct wireless connection manner or in an indirect connection manner. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner such as bluetooth, infrared, etc. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may directly transmit the control command data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display apparatus 200 through a wireless router to establish indirect connection communication with the display apparatus 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as a handle in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes, but is not limited to, a display screen and its associated drive circuitry. In order to present a specific picture and bring about a stereoscopic effect, the display assembly may include two display screens corresponding to the user's left and right eyes respectively. When a 3D effect is presented, the picture contents displayed on the left and right screens differ slightly, showing respectively the frames captured by the left and right cameras during shooting of the 3D film source. Because the user observes the picture content with the left and right eyes, a display picture with a strong stereoscopic impression is perceived when wearing the device.
The optical system in the virtual reality device 500 is an optical module consisting of a plurality of lenses. The optical system is arranged between the user's eyes and the display screen and can increase the optical path through the refraction of light by the lenses and the polarization effect of the polarizers on them, so that the content displayed by the display assembly appears clearly within the user's field of view. Meanwhile, to adapt to the eyesight of different users, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between the lenses and hence the optical path, thereby adjusting the clarity of the picture.
The interface circuit of the virtual reality device 500 may be configured to transmit interactive data, and in addition to the above-mentioned transmission of the gesture data and the display content data, in practical applications, the virtual reality device 500 may further connect to other display devices or peripherals through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output a displayed screen to the display device in real time for display. As another example, the virtual reality device 500 may also be connected to a handle via an interface circuit, and the handle may be operated by a user's hand, thereby performing related operations in the VR user interface.
The VR user interface may be presented as a plurality of different UI layouts according to user operations. For example, the user interface may include a global UI. As shown in fig. 2, after the AR/VR terminal is started, the global UI may be displayed on the display screen of the AR/VR terminal or on the display of the display device. The global UI may include, but is not limited to, at least one of a recommended content area 1, a business class extension area 2, an application shortcut operation entry area 3, and a floating item area 4.
The recommended content area 1 is used for configuring TAB columns of different classifications; media resources, special topics, and the like can be selected and configured in a column. The media assets can include, but are not limited to, services with content such as 2D movies, educational courses, tourism, 3D, 360-degree panorama, live broadcast, 4K movies, program applications, and games. A column can select different template styles and can support simultaneous recommendation and arrangement of media assets and titles, as shown in fig. 3.
In some embodiments, a status bar may also be disposed at the top of the recommended content area 1, and a plurality of display controls may be disposed in the status bar, including but not limited to common options such as time, network connection status, and power. The content included in the status bar may be customized by the user; for example, content such as weather or the user's avatar may be added. The content contained in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a clock window on the current interface or jump to a calendar interface; when the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list on the current interface or jump to the network setup interface.
The content displayed in the status bar may be presented in different content forms according to the setting status of a specific item. For example, the time control may be directly displayed as specific time text information, and display different text at different times; the power control may be displayed as different pattern styles according to the current power remaining condition of the virtual reality device 500.
The status bar is used to enable the user to perform common control operations, enabling rapid setup of the virtual reality device 500. Since the setup program for the virtual reality device 500 includes many items, all commonly used setup options are typically not displayed in their entirety in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion option is selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further set in the expansion window for implementing other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "quick center" option may be set in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include, but is not limited to, "screen capture", "screen recording", and "screen projection" options for waking up the corresponding functions, respectively.
The business class extension area 2 supports configuring extension classes of different classifications. If a new business type exists, an independent TAB can be configured and the corresponding page content displayed. The extended classifications in the business class extension area 2 can also be re-ordered, and offline business operations can be performed on them. In some embodiments, the business class extension area 2 may include content such as movie & TV, education, tourism, application, and my. In some embodiments, the business class extension area 2 is configured to display major business category TABs and supports configuring more categories, as shown in fig. 3.
The application shortcut operation entry area 3 can specify that pre-installed applications are displayed up front for operation recommendation, and supports configuring a special icon style to replace the default icon; a plurality of pre-installed applications can be specified. In some embodiments, the application shortcut operation entry area 3 further includes left and right movement controls for moving the option target so as to select different icons, as shown in fig. 4.
The floating item area 4 may be configured above the left or right oblique side of the fixed area, may be configured as an alternative character, or may be configured as a jump link. For example, after receiving a confirmation operation, the floating item jumps to an application or displays a designated function page, as shown in fig. 5. In some embodiments, the floating item may not be configured with a jump link and is used solely for image presentation.
In some embodiments, the global UI further comprises a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. When the handle of the AR/VR terminal is used, that is, when an icon is selected by the handheld controller, the icon displays a character prompt by expanding left or right, and the selected icon is stretched and expanded to the left or right according to its position.
For example, after the search icon is selected, the search icon displays characters including "search" together with the original icon, and after the icon or the characters are further clicked, the interface jumps to a search page. As further examples: clicking the favorites icon jumps to the favorites TAB; clicking the history icon displays the history page at a default location; clicking the search icon jumps to the global search page; clicking the message icon jumps to the message page.
In some embodiments, the interaction may be performed through a peripheral; for example, a handle of the AR/VR terminal may operate the user interface of the AR/VR terminal. The handle includes a return button; a home key, where long-pressing the home key implements a reset function; volume up and down buttons; and a touch area, which implements the functions of clicking, sliding, pressing and holding a focus, and dragging.
The user can perform interactive operations through the global UI interface, some of which jump to a specific interface. For example, to play media asset data, the user may click any media asset link icon in the global UI interface to start playing the media asset file corresponding to that link, and the virtual reality device 500 may then control a jump to the media asset playing interface.
After jumping to a specific interface, the virtual reality device 500 may further display a status bar at the top of the playing interface and execute the corresponding setting function according to a set interaction manner. For example, as shown in fig. 6, when the virtual reality device 500 plays a video asset and the user wants to perform a screen capture operation on the asset picture, the user may call up the expansion window by clicking the expansion option on the status bar, then click the shortcut center option in the expansion window so that the virtual reality device 500 displays the shortcut center window on the playing interface as shown in fig. 7, and finally click the "screen capture" option in the shortcut center window, whereupon the virtual reality device 500 performs the screen capture operation and stores the display picture at the current time as an image.
The status bar can be hidden when the virtual reality device 500 plays the media asset picture, so as to avoid blocking or interfering with the media asset picture, and its display is triggered when the user performs a specific interaction. For example, the status bar may be hidden when the user is not performing an action with the handle and displayed when the user is. To this end, the virtual reality device 500 may be configured to detect the state of an orientation sensor in the handle, or the state of any button, while playing the media asset picture. When detecting that the detection value of the orientation sensor changes or that a button is pressed, it may control the status bar to be displayed at the top of the playing interface; when detecting that the orientation sensor does not change within a set time and no button is pressed, it controls the status bar to be hidden in the playing interface.
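A minimal sketch of this show/hide logic in Python (the sensor-polling interface, the tuple orientation value, and the 3-second idle timeout are illustrative assumptions; the patent only specifies "a set time"):

```python
import time

class StatusBarController:
    """Show the status bar on handle activity; hide it after an idle timeout.

    A sketch of the behavior described above. The polling API and the
    default timeout are assumptions, not part of the original disclosure.
    """

    def __init__(self, idle_timeout=3.0, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock
        self.visible = False
        self._last_activity = clock()
        self._last_orientation = None

    def on_sensor_sample(self, orientation, button_pressed):
        # Any orientation change or button press counts as handle activity.
        active = button_pressed or (
            self._last_orientation is not None
            and orientation != self._last_orientation
        )
        self._last_orientation = orientation
        if active:
            self._last_activity = self.clock()
            self.visible = True
        elif self.clock() - self._last_activity >= self.idle_timeout:
            self.visible = False
        return self.visible
```

Injecting the clock as a callable keeps the timeout logic testable without real sensor hardware.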
Therefore, in this embodiment, the user can call up the shortcut center through the status bar and click the corresponding option in the shortcut center window to complete the screen capture, screen recording, and screen projection operations. The user can also call up the shortcut center in other interactive modes and display the shortcut center window. For example, as shown in FIG. 8, the user may invoke the shortcut center window by double-clicking the home key on the handle.
The user can select any icon in the shortcut center window to start the corresponding function. The starting mode of the corresponding function may be determined according to the actual interaction mode of the virtual reality device 500. For example, as shown in fig. 9, after the user calls up the shortcut center window, the user may move the handle downward to move the focus mark to the screen capture option of the shortcut center window, and then start the screen capture function by pressing the "OK" key on the handle.
After the above-described screen capture function is started, the virtual reality device 500 may call a screen capture program from memory, and execute screen capture on the currently displayed picture by running the screen capture program. For example, the virtual reality device 500 may perform overlay synthesis on the display contents of all layers by running the screen capture program, to generate a picture file of the currently displayed picture. The generated picture file may be stored according to a predetermined storage path.
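The layer overlay-synthesis step can be illustrated with the standard "over" compositing operator. The sketch below works on plain nested lists of RGBA tuples and is only an illustration of the idea, not the device's actual screen capture program:

```python
def composite_layers(layers):
    """Overlay-composite a bottom-to-top list of RGBA layers into one frame.

    Each layer is a list of rows of (r, g, b, a) tuples with a in 0..255.
    Illustrative sketch of the layer synthesis described above, using the
    standard "over" operator.
    """
    height, width = len(layers[0]), len(layers[0][0])
    out = [[(0, 0, 0, 0)] * width for _ in range(height)]
    for layer in layers:
        for y in range(height):
            for x in range(width):
                sr, sg, sb, sa = layer[y][x]
                dr, dg, db, da = out[y][x]
                a = sa / 255.0
                # Source-over: new pixel weighted by its alpha over the result so far.
                out[y][x] = (
                    round(sr * a + dr * (1 - a)),
                    round(sg * a + dg * (1 - a)),
                    round(sb * a + db * (1 - a)),
                    max(sa, da),
                )
    return out
```

In practice a real implementation would hand this work to the GPU; the pixel loop just makes the compositing rule explicit.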
The virtual reality device 500 includes two displays, corresponding to the left and right eyes of the user, respectively. When some media asset pictures are displayed, in order to obtain a stereoscopic viewing effect, the contents displayed by the two displays correspond to the left and right virtual playing cameras in the 3D scene, respectively; that is, the pictures displayed on the two displays are slightly different. Therefore, when the screen capture operation is performed, the two displays can yield screen capture pictures with different contents.
For this reason, the virtual reality device 500 may detect the form of the displayed picture when performing a screen capture; when detecting that the user is using the 3D mode, it may perform screen capture on the pictures displayed on the left display and the right display respectively, that is, output two screen capture pictures from one screen capture operation. However, since the difference between the contents displayed by the left and right displays in the 3D mode is small, and some users do not need two screen capture pictures, in order to save the storage space of the virtual reality device 500, in some embodiments the screen capture program may instead designate one of the two displays for capture, for example the content displayed by the left display, so as to obtain and store a single screen capture picture.
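The choice between capturing both displays and capturing only one can be sketched as follows; the function and parameter names (`capture_left`, `capture_right`, `save_both`) are assumptions for illustration:

```python
def capture_screens(mode, capture_left, capture_right, save_both=False):
    """Decide which display(s) to capture, per the 3D-mode logic above.

    capture_left/capture_right are callables returning one captured image;
    mode is "2D" or "3D". A sketch under assumed names, not the device's
    actual screen capture program.
    """
    if mode == "3D" and save_both:
        # One screen capture operation outputs two screenshot pictures.
        return [capture_left(), capture_right()]
    # Default: capture only the left display to save storage space.
    return [capture_left()]
```
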
In some embodiments, after completing the storage of the screenshot picture, the virtual reality device 500 may further display prompt content in the displayed interface. For example, as shown in fig. 10, a prompt text window (toast window) may be displayed in a floating manner on the playing interface, including the text "screenshot successful, saved to xx", where "xx" is the specific saving path. The prompt text window can be automatically dismissed after being displayed for a certain time, so as to avoid excessive blocking of the playing interface; for example, it is displayed after the screen capture succeeds and disappears after 2 s.
The prompt text window can also dynamically change the specific prompt text according to the saving progress of the screenshot picture. For example, after the user confirms the screen capture operation, "screen capture successful, saving the screenshot picture" is displayed through the prompt text window, and "saved to xxx" is displayed after saving is completed.
It should be noted that, because the user generally does not want the screenshot image to include the shortcut center interface when performing the screenshot operation, in order to capture the played media content, after the user clicks the screenshot icon, the shortcut center window may be hidden.
In some embodiments, after the screen capture operation is completed, the screen capture result may be displayed on the playing interface, that is, a display window is displayed in a floating manner on an upper layer of the playing interface, and the screenshot picture is presented in the display window for the user to view. Further, while the screenshot picture is presented, some drawing tool options may also be displayed in the display window, such as a line drawing tool, an oval tool, a rectangle tool, a text tool, and the like, and the user may perform processing such as masking, annotating, and cropping on the screenshot picture by clicking these drawing tools, so as to output a better screenshot result.
As can be seen, in the above embodiment, the virtual reality device 500 may perform the screen capture operation quickly through the shortcut center window or the shortcut key, so as to save the screen capture picture according to the content displayed by the virtual reality device 500. The screen capture objects of the screen capture operation can be different according to different application scenes. For example, the virtual reality device 500 may capture a screen of content displayed in a display, or may capture a partial region of a rendered scene.
When playing media assets, the virtual reality device 500 may render the media asset picture, that is, set a display panel in the rendered scene for presenting the media asset picture content, and add virtual objects such as seats and speakers to form a virtual scene, so as to output effects such as a simulated cinema or home scene. At this time, if the virtual reality device 500 performs screen capture on the display content, the captured picture includes not only the media asset picture but also the rendered virtual object picture.
The virtual reality device 500 may also capture the picture presented by the display panel in the rendered scene, i.e., capture only the media asset picture content. Specifically, the screen capture operation may be performed on the display panel picture area in the rendered scene, or the virtual reality device 500 may directly extract the media asset picture frame data after parsing the media asset data and copy the extracted frame data, thereby obtaining a picture without the rendered virtual objects.
In some embodiments, the virtual reality device 500 may also perform a screen shot of a portion of the region in the rendered scene. For example, when the user wears the virtual reality device 500 and moves to any viewing angle, screen capturing may be performed on the rendered screen content in the display panel area and/or the vicinity at the current viewing angle, thereby obtaining screen capturing screen content in the highlight area or the user setting area.
After the user performs the screen capture operation in the above manner, the screen capture picture can be obtained. The captured screenshot may be played or presented in the virtual reality device 500 or other display device. When the screenshot picture is played by using the virtual reality device 500, the virtual reality device 500 may present the screenshot picture contents through the browsing interface.
The browsing interface is an interface rendered by the rendering engine of the virtual reality device 500 and then output to the display, and includes but is not limited to a picture display area, an operation area, and a background area. Each area obtains its specific picture content from the rendered scene. For example, the picture content corresponding to the picture display area may be the screenshot picture content being played, displayed in real time on a display panel set in the rendered scene; the picture content corresponding to the operation area is a UI model set in the rendered scene, used for executing corresponding user operations; and the picture content corresponding to the background area is the picture pattern presented by the sky box, which may include a planar pattern content picture or other model picture contents set in the rendered scene. The sky box is the background picture content set by the virtual reality device 500 for presenting various interfaces, and may be a black background, a solid-color background, or another background with a specific pattern.
Since the display panel is planar, the picture display area in the browsing interface can also be planar. The operation area can present different picture contents according to different browsing interfaces, and the background area presents different pictures according to the type of sky box. For example, as shown in fig. 11, in order to simulate an indoor viewing scene, an indoor home scene such as a sofa may be displayed in the background area. It can be seen that, in the browsing interface shown in fig. 11, when the screenshot picture includes rendered scene content (such as a desk, a chair, the ground, a background wall, and the like), the rendered scene picture does not match the sky box picture, so the picture appears disordered and the user cannot have a good immersion experience.
In some embodiments of the present application, a virtual reality device 500 is also provided, including but not limited to a display and a controller, for a better immersive experience. The display is used for displaying a playing interface, and the controller can run a screenshot picture playing method program, so that the screenshot picture is played according to the screenshot picture playing method. That is, as shown in fig. 12, the controller is configured to perform the following program steps:
s1: and acquiring a control instruction which is input by a user and used for playing the screen capture picture.
The user can output control instructions for various purposes through different interactive actions during use of the virtual reality device 500. When the user inputs a control instruction for playing a screenshot picture, the virtual reality device 500 may play the screenshot picture in response to the control instruction; for example, the virtual reality device 500 jumps to the playing interface after the user inputs the control instruction, or, within the playing interface, jumps from displaying one screenshot picture to displaying another. Therefore, the control instruction input by the user may be an interactive action input on another interface or an interactive action input on the playing interface.
For example, the user may click the icon of a screenshot picture file in the file list to trigger the virtual reality device 500 to play the selected picture; in this case, the click on the picture file icon input by the user serves as the control instruction for playing the screenshot picture. The virtual reality device 500 may also present UI interaction controls such as a picture list, "previous", and "next" on the playing interface; when a screenshot picture is displayed in the playing interface, the user can switch to displaying an adjacent or selected picture by clicking the "previous" or "next" UI interaction control, or by clicking any picture icon in the picture list. In this case, the action of clicking the interaction control or the picture icon input by the user is the control instruction for playing the screenshot picture.
In addition, the control instruction for playing the screenshot picture can be input automatically by the controller based on the working state of the virtual reality device 500. For example, when the user performs a screen capture using the virtual reality device 500, the virtual reality device 500 may display the screen capture result after completing the screen capture. Accordingly, the virtual reality device 500 may automatically input a control instruction for playing the screenshot picture when presenting the screen capture result.
S2: and responding to the control instruction, and analyzing the screen capture picture file to be played to obtain a screen capture image picture and a field angle.
After obtaining the control instruction input by the user, the virtual reality device 500 may, in response to the control instruction input by the user, parse the screenshot picture file to be played, including but not limited to decompress, read pixel information, and the like, to obtain the screenshot image picture information. In the process of analyzing the screenshot picture file, the virtual reality device 500 may further read rendering scene information corresponding to the screenshot picture, so as to determine the field angle through the rendering scene information.
The picture content viewed by the user on the virtual reality device 500 is displayed in a sphere with Camera as the center and R as the radius in the rendered scene. The range that can be seen by the user is related to a field angle of view (FOV) of the virtual reality device 500, and the range between the boundaries of the FOV is the range that can be seen by the user.
It should be noted that, in this embodiment, when the screenshot picture includes a rendered scene picture, it may be displayed by the screenshot picture playing method, and when it does not include a rendered scene picture, it need not be displayed according to this method. Therefore, when the screenshot picture file to be played is parsed, whether the screenshot picture contains a rendered scene picture can be detected. For example, the source of the screenshot file can be determined by reading the description information of the screenshot file; when it is read that the screenshot file was generated by the virtual reality device 500, the shape of the effective region in the picture can be further detected. When it is detected that the picture has not undergone distortion processing, it is determined that the current screenshot picture was obtained from texture data in the rendered scene and may contain a rendered scene picture, and the screenshot picture is then parsed in the above manner to obtain data such as the screenshot image picture and the field angle.
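Assuming a hypothetical container format in which the picture file carries a length-prefixed JSON description header (the patent does not specify the on-disk layout), the parse-and-detect step might look like:

```python
import json

def parse_screenshot(blob):
    """Split a screenshot file blob into its description metadata and image payload.

    Assumed container: a 4-byte big-endian header length, a JSON header
    carrying keys such as "fov" and "rendered_scene", then the encoded
    image bytes. Purely illustrative.
    """
    header_len = int.from_bytes(blob[:4], "big")
    meta = json.loads(blob[4:4 + header_len].decode("utf-8"))
    image_bytes = blob[4 + header_len:]
    return meta, image_bytes

def needs_sphere_playback(meta):
    # Only screenshots that contain a rendered scene picture and carry a
    # field angle use the player-model playback method described here.
    return bool(meta.get("rendered_scene")) and "fov" in meta
```
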
S3: and calling a player model according to the field angle.
After reading the angle of view, the virtual reality device 500 may also invoke a player model according to the read angle of view. The player model is used for displaying the screen capture image picture content in the rendering scene, so that the player model is a curved surface model which is established in advance according to the field angle in the rendering scene.
Since the screen that the user can see through the virtual reality device 500 is a spherical area at the angle of view, the player model can be set to a spherical shape for a better immersive experience. For different angles of view, the spherical range of the player model is also different, for example, the spherical shape range under the wide-angle virtual reality device is larger than that under the conventional virtual reality device.
In the virtual reality apparatus 500, a player model suitable for a corresponding angle of view, that is, a spherical model provided with a plurality of different specifications may be stored in advance. After the virtual reality device 500 reads the current field angle, matching can be performed in the model list according to the value of the field angle, and the player model hit by matching is extracted to be added to the rendering scene for displaying the screen capture image picture.
S4: mapping the screen shot image to the player model.
After the player model is called, the virtual reality device 500 may further map the screen capture image to the player model, that is, the screen capture image is displayed on the player model according to the position relationship of the pixel points in the image. Since the screenshot picture to be played is a planar picture and the player model display area is a curved surface, the virtual reality device 500 may merge or add pixel points in the screenshot picture, thereby displaying the planar picture in the curved surface area.
In the embodiment of the present application, by mapping the screenshot image picture to the player model, the screenshot image picture can be displayed on the player model after the mapping is completed, as shown in fig. 13. Because the spherical area corresponding to the player model can cover the field angle, the screenshot image picture can fill the user's field of vision through the player model; that is, the screenshot image picture replaces the sky box picture of the traditional browsing interface, the overlapping display of the rendered scene picture in the screenshot and the sky box picture is avoided, and a better immersion experience is obtained.
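The mapping from planar screenshot pixels to positions on the curved player model can be sketched by spanning the field angle with normalized pixel coordinates; the angle conventions follow the spherical ranges discussed later in the text, and the simple linear mapping is an illustrative assumption:

```python
def pixel_to_direction(px, py, width, height, fov_deg):
    """Map a screenshot pixel to (theta, phi) on the spherical player model.

    Normalized coordinates u, v in [0, 1] span the horizontal/vertical
    field angle centred on the view axis, so theta lies in 180 +/- FOV/2
    and phi in 90 +/- FOV/2 (degrees). A sketch, not the device's actual
    texture mapping.
    """
    u = px / (width - 1)
    v = py / (height - 1)
    theta = 180 - fov_deg / 2 + u * fov_deg   # horizontal (azimuth) angle
    phi = 90 - fov_deg / 2 + v * fov_deg      # vertical (polar) angle
    return theta, phi
```

A renderer would invert this relation to assign UV coordinates to the model's vertices, so that the planar picture is stretched across the curved display area.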
In the above embodiment, the virtual reality device 500 may obtain the screen capture image picture and the angle of view by parsing the screen capture picture file, so as to call the player model according to the angle of view. To enable a viewing angle to be obtained in the screenshot picture file, in some embodiments, the virtual reality device 500 may also receive user-entered instructions for screenshot.
The user may input the screen capture instruction in different ways according to rules set in the operating system of the virtual reality device 500. For example, the user can call out the shortcut center window by double-clicking the home key on the handle, click the "screen capture" option in the shortcut center window, and input a screen capture instruction.
The user can also input a screen capturing instruction through shortcut key operation. The shortcut key may be set according to the setting condition of the physical key on the virtual reality device 500, that is, different virtual reality devices 500 may be provided with different shortcut key combinations for implementing the screen capture operation. For example, the user may input a screen capture command in the form of a combination key of "power key" and "volume +". For the virtual reality device 500 externally connected with interactive devices such as a handle, a user can input a screen capturing instruction through the combination of the handle keys and the virtual reality device 500 keys.
For the partial virtual reality device 500, the user may also complete the input of the control instruction by means of other interactive devices or interactive systems. For example, a smart voice system may be built into the virtual reality device 500, and the user may input voice information such as "screen shot", "i want to leave the current screen", etc. through an audio input device such as a microphone. The intelligent voice system recognizes the meaning of the voice information by converting, analyzing, processing and the like the voice information of the user, and generates a control instruction according to the recognition result to control the virtual reality device 500 to execute the screen capturing operation.
After receiving a screen capture instruction input by a user, the virtual reality device 500 may acquire a screen capture image from the rendered scene in response to the screen capture instruction, that is, by performing image capture on the rendered scene, thereby obtaining the screen capture image. The rendering scene refers to a virtual scene constructed by a rendering engine of the virtual reality device 500 through a rendering program. For example, the virtual reality device 500 based on the unity3D rendering engine may construct a unity3D scene when rendering a display screen. In a unity3D scene, various virtual objects and functional controls may be added to render a particular usage scene. For example, when playing a multimedia asset, a display panel can be added to the unity3D scene, and the display panel is used for presenting a multimedia asset picture. Meanwhile, virtual object models such as seats, sound equipment and characters can be added in the unity3D scene, and therefore the cinema effect is created.
The virtual reality apparatus 500 may also set a virtual camera in the unity3D scene in order to output the rendered screen. For example, the virtual reality apparatus 500 may set a left-eye camera and a right-eye camera in the unity3D scene according to the positional relationship of the two eyes of the user, and the two virtual cameras may simultaneously capture an object in the unity3D scene, so as to output rendered pictures to the left display and the right display, respectively. For a better immersive experience, the angles of the two virtual cameras in the unity3D scene may be adjusted in real-time with the pose sensor of the virtual reality device 500, so that rendered pictures in the unity3D scene at different viewing angles may be output in real-time as the user acts wearing the virtual reality device 500.
Thus, the virtual reality device 500 may set the TargetTexture of a Camera in the rendered scene to a RenderTexture and perform image capture of the virtual objects and media asset pictures within the rendered scene, thereby directly acquiring an image of the key region from the rendered scene.
In order to capture an image of the key area, the virtual reality device 500 may obtain a screenshot image directly through a virtual display camera in the rendered scene, or may obtain the screenshot image through a specially-made virtual screenshot camera. That is, a virtual screen capture camera may be set in the unity3D scene, and the virtual screen capture camera is set to capture images of only the key area. After the user inputs a screen capture instruction, the virtual screen capture camera can perform image capture and output a target image. Because the acquired image is an image directly generated by each virtual control or object in the rendered scene and is not subjected to processing such as distortion, the acquired target image is not affected by distortion, and the picture in the rendered scene can be well reserved.
In the process of obtaining the screenshot image picture, the virtual reality device 500 may also record position information at the time of image capture, including the capture position and the field angle. For example, when capturing an image using a screen capture camera, the position of the screen capture camera in the currently rendered scene may be recorded together with its field angle. The recorded position information can be parsed when the screenshot picture file is played, so as to restore the virtual rendered scene; in order to obtain a better restoration effect, the position information of virtual objects in the rendered scene can also be recorded, such as the position of the display panel.
After recording the field angle, the virtual reality device 500 may save the screenshot image. Since the image obtained by image capturing is a kind of pixel information, a picture file converted into a specific format is required to be read, called, and transmitted by the virtual reality device 500 or other devices in subsequent use. Therefore, when saving the screenshot image picture, the virtual reality device 500 may encode, compress, etc. the screenshot image picture to generate a picture file. The generated picture file may be saved in a file saving path designated in the memory of the virtual reality device 500 or sent to another device via a communication connection. In the process of storing the screenshot picture file, the recorded position information can be integrated into the picture file so as to be analyzed and obtained in the subsequent playing process.
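A sketch of integrating the recorded position information into the picture file, using a hypothetical length-prefixed JSON header container (the patent only states that the recorded information is integrated into the file, without specifying a format):

```python
import json

def save_screenshot(image_bytes, fov_deg, camera_pos, panel_pos):
    """Package a captured frame with the position information recorded
    during capture into one file blob.

    The container layout (4-byte big-endian header length + JSON header
    + image payload) and the field names are illustrative assumptions.
    """
    header = json.dumps({
        "fov": fov_deg,                  # field angle at capture time
        "camera_position": camera_pos,   # screen-capture camera pose
        "panel_position": panel_pos,     # display panel in the rendered scene
        "rendered_scene": True,
    }).encode("utf-8")
    return len(header).to_bytes(4, "big") + header + image_bytes
```

On playback, the device would read the header back to recover the field angle and positions before decoding the image payload.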
As can be seen, in the above embodiment, after the screen capture operation instruction is obtained, the virtual reality device 500 may obtain an undistorted screen capture image frame by performing image capture in the rendered scene, so as to generate a screen capture picture file. The screen capture mode can avoid the influence of distortion processing on the picture content, and can enable the picture file obtained by screen capture to carry the position information such as the angle of view, the shooting position and the like, thereby being convenient for analysis when the file is played.
In the above embodiment, the screenshot image may be presented by a pre-created player model conforming to the current field angle, but since the screenshot image may originate from a different virtual reality device, the different virtual reality device may have a different field angle, that is, in the virtual reality device 500 used by the user, the player model corresponding to the field angle may not be pre-stored. To this end, as shown in fig. 14, in some embodiments, in the step of calling the player model according to the field angle, the controller is further configured to:
S310: acquiring a player model list;
S320: matching a player model in the player model list using the field angle;
S330: if a player model under the field angle is matched in the player model list, calling the matched player model;
S340: if no player model under the field angle is matched in the player model list, creating a player model according to the field angle.
The virtual reality device 500 may match the player model according to the read angle of view after reading the angle of view. In the matching process, the virtual reality device 500 may first obtain the player model list, and then complete model matching through the player model list. A plurality of entries may be included in the player model list, and each entry records a storage address of one player model in the current virtual reality device 500 and a corresponding viewing angle thereof.
And the controller matches each table entry in the player model list according to the angle of view, and when the angle of view value recorded in any table entry is equal to the read angle of view value, the table entry represents a player model matched with the angle of view in the player model list, so that the model can be called and loaded into a rendering scene. And when no entry with the same value as the viewing angle exists in the player model list, determining that the player model under the viewing angle is not matched in the player model list, and at this time, in order to continue playing the current screenshot picture, the virtual reality device may create a new player model according to the current viewing angle and the screenshot picture.
The player model may be created by the virtual reality device 500 through a rendering engine, or may be created by a third-party application. For example, tools for modeling include, but are not limited to: maya software, 3dsmax software, ZBrush software, Headus UVLayout software, BodyPaint 3D software, PS software, Vray renderer software, and the like. The model creation can be completed by one third-party application or by cooperation of a plurality of third-party applications. After the model is successfully created, the model is imported into unity development engineering and added into a rendering scene, so that the screenshot is presented in the whole view of a user, and the same experience effect as that of the screenshot scene is obtained.
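The match-or-create flow of steps S310–S340 can be sketched as follows; the model-list structure (a mapping from field angle to stored model) and the `create_fn` callback are illustrative assumptions:

```python
import math

def get_player_model(fov_deg, model_list, create_fn, tol=1e-6):
    """Return a player model for the given field angle.

    S310/S320: scan the stored list for a model whose recorded field
    angle equals the requested one; S330: call the matched model;
    S340: otherwise create a new model and cache it. Names are
    illustrative, not from the original disclosure.
    """
    for stored_fov, model in model_list.items():
        if math.isclose(stored_fov, fov_deg, abs_tol=tol):
            return model              # S330: matched, call the stored model
    model = create_fn(fov_deg)        # S340: no match, create a new model
    model_list[fov_deg] = model       # cache for later playback
    return model
```

Caching the newly created model means a screenshot from an unfamiliar device only pays the model-creation cost once.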
In some embodiments, as shown in fig. 15, in the step of creating a player model according to the field angle, the controller is further configured to:
S341: acquiring the resolution of the screenshot picture to be played;
S342: calculating the radius of the curved surface according to the resolution and the field angle;
S343: creating a spherical coordinate system in the rendered scene according to the curved surface radius;
S344: intercepting the curved surface range under the field angle in the spherical coordinate system to generate the player model.
Since the player model is part of a sphere, it can be described in a spherical coordinate system when it is created. The spherical coordinate system is a three-dimensional coordinate system used to determine the positions of points, lines, surfaces, and volumes in three-dimensional space; it takes the coordinate origin as the reference point and consists of an azimuth angle, an elevation angle, and a distance.
As shown in FIG. 16, in the spherical coordinate system the distance from a point on the sphere to the origin is denoted ρ, with 0 ≤ ρ < +∞. In the ρ–z plane, the angle of deflection from the positive z-axis to ρ is φ, with 0 ≤ φ ≤ π. The angle from the x-axis to the ρ–z plane is θ, with 0 ≤ θ ≤ 2π. A player model can be generated by delimiting the curved-surface range under the field angle in this spherical coordinate system.
Since the maximum value of φ on the sphere is 180°, when the camera's visible angle is FOV, the visible range in the vertical direction is FOV. In the spherical coordinate system, the starting value of φ is (180 − FOV)/2 = 90 − FOV/2 and the ending value is 90 − FOV/2 + FOV = 90 + FOV/2; that is, the vertical range of φ is (90 − FOV/2, 90 + FOV/2). Converting this range to radians, φ takes values from (90 − FOV/2)·π/180 to (90 + FOV/2)·π/180.
Similarly, since the maximum value of θ on the sphere is 360° and the visible range in the horizontal direction is FOV, the starting value of θ is (360 − FOV)/2 = 180 − FOV/2 and the ending value is 180 − FOV/2 + FOV = 180 + FOV/2; that is, the horizontal range of θ is (180 − FOV/2, 180 + FOV/2). Converting this range to radians, θ takes values from (180 − FOV/2)·π/180 to (180 + FOV/2)·π/180.
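The range computation above can be sketched as follows, expressing the vertical (φ) and horizontal (θ) angle ranges of the sphere patch in radians; the function name is an illustrative assumption.

```python
import math

def patch_angle_ranges(fov_v_deg, fov_h_deg):
    """Angle ranges (radians) of the sphere patch visible under the given
    vertical and horizontal field angles, per the derivation above:
    phi spans (90 - FOV/2, 90 + FOV/2) degrees and
    theta spans (180 - FOV/2, 180 + FOV/2) degrees."""
    phi = ((90 - fov_v_deg / 2) * math.pi / 180,
           (90 + fov_v_deg / 2) * math.pi / 180)
    theta = ((180 - fov_h_deg / 2) * math.pi / 180,
             (180 + fov_h_deg / 2) * math.pi / 180)
    return phi, theta
```

For a 90° field angle in both directions this yields φ ∈ (π/4, 3π/4) and θ ∈ (3π/4, 5π/4).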
Based on this spherical coordinate system, when no player model matches the current field angle, the virtual reality device 500 may read the resolution of the screenshot picture to be played and determine the size of the current picture and the display state best suited to it, such as an appropriate surface shape and viewing distance. The curved-surface radius, i.e. ρ in the spherical coordinate system, is then calculated from the resolution and the field angle, and the spherical coordinate system is created in the rendering scene with that radius.
The virtual reality device 500 then intercepts the curved-surface range in the spherical coordinate system according to the value ranges corresponding to the field angle, generating the player model: after the spherical coordinate system is created, the region whose vertical direction lies within (90 − FOV/2, 90 + FOV/2) and whose horizontal direction lies within (180 − FOV/2, 180 + FOV/2) is intercepted. After generating the player model, the virtual reality device 500 may load it into the rendering scene for rendering the screenshot picture.
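As an illustration of intercepting the curved-surface range, the following sketch samples a grid of 3D vertices on the sphere patch under the field angle, assuming the same FOV vertically and horizontally and measuring φ from the positive z-axis as described above; the function name and grid resolution are assumptions, and a real engine would also emit triangle indices.

```python
import math

def build_patch_vertices(radius, fov_deg, segments=8):
    """Sample a (segments+1) x (segments+1) grid of vertices on the sphere
    patch intercepted under the field angle, in the spherical coordinate
    system of the embodiment (phi from +z, theta from +x)."""
    verts = []
    for i in range(segments + 1):
        phi = math.radians(90 - fov_deg / 2 + fov_deg * i / segments)
        for j in range(segments + 1):
            theta = math.radians(180 - fov_deg / 2 + fov_deg * j / segments)
            # Standard spherical-to-Cartesian conversion.
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.sin(phi) * math.sin(theta)
            z = radius * math.cos(phi)
            verts.append((x, y, z))
    return verts
```

Every sampled vertex lies at distance `radius` from the origin, so the patch is indeed a portion of the sphere created in step S343.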
As can be seen, in the above embodiment the virtual reality device 500 may call or create a player model by matching the field angle, and then play the screenshot through a player model adapted to the current screenshot. Using the screenshot content in place of the sky-box picture in the browsing interface alleviates the mismatch between the sky-box picture and the screenshot content and improves the user's sense of immersion.
In some embodiments, to map the screenshot image to a player model, the controller is further configured to:
S410: constructing two-dimensional texture coordinates according to the screen capture image picture, and constructing spherical texture coordinates according to the player model;
S420: setting the range of the two-dimensional texture coordinates and the range of the spherical texture coordinates;
S430: mapping the points in the two-dimensional texture coordinates into the spherical texture coordinates according to the normal-vector coordinates of each point on the spherical texture coordinates.
When the player model is successfully created, the virtual reality device 500 may display the captured picture on it. The screenshot picture is a planar texture, while the display shape on the player model is a spherical texture; therefore, as shown in fig. 17, mapping the screenshot onto the player model in this embodiment amounts to mapping a planar texture onto a spherical surface.
Specifically, the virtual reality device 500 may first construct two-dimensional texture coordinates from the captured image picture and spherical texture coordinates from the player model, and then determine the correspondence between the two by setting their ranges, so that points in the two-dimensional texture coordinates can be mapped into the spherical texture coordinates according to the normal-vector coordinates of each point on the spherical texture.
For example, as shown in fig. 18, the two-dimensional texture coordinates may be represented by the outer box in the figure, with range (u, v)min = (0, 0) and (u, v)max = (1, 1). The spherical normal-vector coordinates are represented by the circle inside the box, with x and y components ranging over (x, y)min = (−1, −1) to (x, y)max = (1, 1). Mapping the two-dimensional texture coordinates to the spherical coordinates therefore means mapping the interval (x, y)min–(x, y)max onto the interval (u, v)min–(u, v)max.
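A minimal sketch of this interval mapping, assuming a simple linear correspondence between the normal-vector components in [−1, 1] and the texture interval [0, 1]; the function name is illustrative.

```python
def normal_to_uv(nx, ny):
    """Map the x/y components of a spherical normal vector, which range
    over [-1, 1], onto the [0, 1] texture interval described above."""
    return (nx + 1) / 2, (ny + 1) / 2
```

Under this mapping, (x, y)min = (−1, −1) lands on (u, v)min = (0, 0) and (x, y)max = (1, 1) on (u, v)max = (1, 1).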
To improve mapping accuracy, the virtual reality device 500 may further split the screenshot picture into sub-regions according to its resolution, perform texture sampling on each sub-region, and convert the coordinates of the texture sampling points within each sub-region into spherical-point coordinates using an arcsine function.
For example, the virtual reality device 500 may use the arcsine function y = arcsin(x), whose domain is x ∈ [−1, 1] and whose range is y ∈ [−π/2, π/2]. Let the radius of the three-dimensional sphere be r, the horizontal rotation angle be h (h ∈ [0, 2π]), and the up–down rotation angle be p (p ∈ [−π/2, π/2]). The three-dimensional coordinates of a point on the sphere are then x = r·cos(p)·cos(h), y = r·cos(p)·sin(h), z = r·sin(p). Inverting these relations gives p = arcsin(z/r) and h = arctan(y/x).
Therefore, when the up–down rotation angle p is mapped to the V direction of the texture and the horizontal rotation angle h to the U direction, the UV range is [0, 1]. Given the spherical coordinates (x, y, z) and the radius r, the texture coordinates corresponding to a spherical point are u = arctan(y/x)/(2π) and v = arcsin(z/r)/π + 0.5. Using this result, the virtual reality device 500 may convert between the two-dimensional texture coordinates and the spherical coordinates, so as to display the screenshot picture on the player model.
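The texture-coordinate conversion above can be sketched as follows; `atan2` is used in place of arctan(y/x) to keep the correct quadrant, which is an implementation choice not spelled out in the patent, and the result is wrapped into [0, 1).

```python
import math

def sphere_point_to_uv(x, y, z, r):
    """Texture coordinates of a spherical point (x, y, z) on a sphere of
    radius r, following u = arctan(y/x)/(2*pi), v = arcsin(z/r)/pi + 0.5."""
    u = (math.atan2(y, x) / (2 * math.pi)) % 1.0  # wrap azimuth into [0, 1)
    v = math.asin(z / r) / math.pi + 0.5          # elevation into [0, 1]
    return u, v
```

For instance, a point on the +x axis maps to the vertical middle of the texture (v = 0.5), and the poles map to v = 0 and v = 1.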
In the above embodiments, when displaying a screenshot, the virtual reality device 500 may present the screenshot image using the player model corresponding to the scene at capture time. In practice, however, some virtual reality devices 500 can output several types of screenshot pictures, for example 2D pictures, 3D pictures, and panoramic pictures, and the virtual reality device 500 needs a different playing method for each type. Therefore, as shown in fig. 19, in some embodiments, in the step of parsing the screenshot picture file to be played, the controller is further configured to:
S210: reading the picture type of the screenshot picture to be played;
S220: if the picture type is a first type, executing the step of parsing the screenshot picture file to be played to obtain the screenshot image picture and the field angle;
S230: if the picture type is a second type, hiding the rendering background in the playing interface so as to display the screenshot picture in the rendering scene.
Among the picture types output by the virtual reality device 500 screenshot function, a 2D picture is a regular image and can be displayed according to the screenshot playing method described above. A 3D picture, however, contains two sub-pictures corresponding to the left and right eyes, arranged in a specific layout: in a left–right 3D picture the left-eye image occupies the left half and the right-eye image the right half; in a top–bottom 3D picture the left-eye image occupies the upper half and the right-eye image the lower half. If such a picture were played directly, the player model would display both the left-eye and right-eye images at once, producing a display error.
In this embodiment, the virtual reality device 500 may read the picture type of the screenshot when parsing the screenshot to be played. The screenshot picture types include a first type and a second type: the first type is a regular image such as a 2D picture, and the second type is a picture containing multiple sub-pictures, such as the left-eye and right-eye images of a 3D picture. If the picture type read is the first type, the screenshot picture file to be played is parsed in the manner of the above embodiments to obtain the screenshot image picture and the field angle.
If the picture type is the second type, the screenshot picture file carries a 3D effect, so the screenshot picture can be displayed directly in the rendered scene to give the user an immersive experience. During display, the left-eye image of the 3D picture is output to the left display and the right-eye image to the right display. For example, the 3D picture is segmented to separate the left-eye and right-eye images, which are presented simultaneously in the rendered scene; the left-eye image is set visible only to the virtual display camera corresponding to the left display, and the right-eye image only to the camera corresponding to the right display, thereby outputting the correct image to each display. When displaying a 3D picture, the virtual reality device 500 may further hide the original sky-box region in the rendered scene and retain only the screenshot content, so as to reduce interference from the sky-box picture.
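The left/right separation for the two stereo layouts can be sketched as follows; the pixel representation (a row-major list of rows) and the function and layout names are illustrative assumptions.

```python
def split_stereo_picture(pixels, width, height, layout="side-by-side"):
    """Split a stereo screenshot into left-eye and right-eye halves.

    pixels: row-major list of rows (each row a list of pixel values).
    layout: "side-by-side" (left-right 3D) or "top-bottom" (top-bottom 3D).
    """
    if layout == "side-by-side":
        left = [row[: width // 2] for row in pixels]   # left half of each row
        right = [row[width // 2:] for row in pixels]   # right half of each row
    elif layout == "top-bottom":
        left = pixels[: height // 2]                   # upper half of the rows
        right = pixels[height // 2:]                   # lower half of the rows
    else:
        raise ValueError("unknown stereo layout: " + layout)
    return left, right
```

The two halves would then be assigned to the virtual display cameras of the left and right displays respectively, as described above.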
In addition, since a panoramic picture can be displayed directly on a spherical surface, when the screenshot picture to be played is determined to be a panoramic image, the virtual reality device 500 may display it directly on the sphere corresponding to the player model, obtaining a surround effect.
As can be seen from the above embodiments, the virtual reality device 500 may display the screenshot content in the rendered scene after mapping the screenshot to the player model. To output the picture content in the rendered scene to a display, as shown in fig. 20, in some embodiments, after the step of mapping the screenshot image picture to the player model, the controller is further configured to:
S501: adding the player model mapped with the screen capture image picture to the rendering scene;
S502: setting the position of the player model in the rendering scene so that the screen capture image picture covers the whole field angle;
S503: shooting the current rendering scene through a virtual display camera to output the screen capture image picture.
After mapping the screenshot image onto the player model, the virtual reality device 500 may add the model, now carrying the screenshot image, to the rendered scene. It then sets the position of the player model in the rendered scene so that the screenshot image covers the field angle of the current virtual reality device 500, i.e., as shown in fig. 21, fills the user's field of view.
The virtual reality device 500 may also add interactive controls to the rendering scene, such as a screenshot picture list and "previous"/"next" UI controls, so that the user can select and switch the displayed content through them to control the playing process.
After all the contents of the playing interface have been added to the rendering scene, the virtual reality device 500 may shoot the current rendered scene through virtual display cameras to output the display picture in real time. Two virtual display cameras may be set in the rendering scene, corresponding respectively to the left and right displays of the virtual reality device 500, thereby simulating binocular viewing and presenting a stereoscopic effect to the user. The virtual display cameras can follow the pose sensor in the virtual reality device 500 to adjust the shooting angle, so that the picture content follows the user's head movements while wearing the device, yielding a better immersion experience.
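The two-camera arrangement can be sketched as a horizontal eye offset from the tracked head position; the 0.064 m interpupillary distance and the function name are illustrative assumptions, not values from the patent.

```python
def eye_camera_positions(head_pos, ipd=0.064):
    """Positions of the two virtual display cameras (left eye, right eye),
    offset horizontally by half the interpupillary distance (IPD) from the
    head position reported by the pose sensor."""
    x, y, z = head_pos
    return (x - ipd / 2, y, z), (x + ipd / 2, y, z)
```

Each camera renders the scene to its corresponding display, which is what produces the stereoscopic effect described above.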
Based on the virtual reality device 500, in some embodiments of the present application, a method for playing a screenshot picture is further provided, where the method includes the following steps:
S1: acquiring a control instruction input by a user for playing a screen capture picture;
S2: in response to the control instruction, parsing the screenshot picture file to be played to obtain a screenshot image picture and a field angle;
S3: calling a player model according to the field angle, wherein the player model is a curved-surface model built in advance in a rendering scene according to the field angle;
S4: mapping the screen capture image picture onto the player model.
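Steps S1–S4 can be sketched as a small orchestration, with all engine operations replaced by hypothetical callables supplied by the caller:

```python
def play_screenshot(screenshot_file, get_model, create_model, map_texture):
    """Orchestration sketch of steps S1-S4: parse the screenshot file,
    call (or create) the player model for its field angle, then map the
    picture onto it. All callables are illustrative stand-ins for the
    rendering-engine operations described in the embodiments."""
    picture, fov = screenshot_file["picture"], screenshot_file["fov"]  # S2
    model = get_model(fov) or create_model(fov)                        # S3
    map_texture(model, picture)                                        # S4
    return model
```

The `or` fallback mirrors the match-or-create behavior of claim 3: a matched model is reused, otherwise a new one is created for the field angle.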
According to the above technical solution, the screenshot picture playing method provided by this embodiment parses the screenshot picture file after acquiring the control instruction input by the user, so as to obtain the screenshot image picture and the field angle, and then calls the player model according to the field angle to map the screenshot image picture onto it. The method displays the screenshot picture through a player model shaped like the display area under the corresponding field angle, so that the screenshot picture fills the entire field of view during playback, alleviating confusion between the sky-box picture and the screenshot picture. Moreover, restoring the screenshot scene through a curved-surface model adapted to the screenshot yields a better immersion experience.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

Claims (10)

1. A virtual reality device, comprising:
a display configured to display a user interface including a picture imaging region and a sky box region, the picture imaging region for presenting multimedia content; the sky box area is located around the image area and used for presenting rendering background content;
a controller configured to:
acquiring a control instruction which is input by a user and used for playing the screen capture picture;
responding to the control instruction, and analyzing the screenshot picture file to be played to obtain a screenshot image picture and a field angle;
calling a player model according to the field angle, wherein the player model is a curved surface model which is built in a rendering scene in advance according to the field angle;
mapping the screen shot image to the player model.
2. The virtual reality device of claim 1, wherein prior to the obtaining of the user-entered control instruction for playing the screenshot, the controller is further configured to:
receiving a screen capture instruction input by a user;
responding to the screen capture instruction, and performing image shooting on the current rendering scene to obtain a screen capture image picture;
recording a shooting position and a field angle in the rendered scene when image shooting is performed;
and saving the screen capture image picture and the field angle to generate a screen capture picture file.
3. The virtual reality device of claim 1, wherein in the step of invoking a player model according to the field angle, the controller is further configured to:
acquiring a player model list;
matching player models in the list of player models using the field angle;
if the player models under the field angle are matched in the player model list, calling the matched and hit player models;
and if the player model under the view angle is not matched in the player model list, creating a player model according to the view angle.
4. The virtual reality device of claim 3, wherein in the step of creating a player model according to the field angle, the controller is further configured to:
acquiring the resolution of a screenshot picture to be played;
calculating the radius of a curved surface according to the resolution and the field angle;
creating a spherical coordinate system in a rendering scene according to the curved surface radius;
and intercepting the curved surface range under the field angle in the spherical coordinate system to generate a player model.
5. The virtual reality device of claim 1, wherein in the step of mapping the screen shot image to the player model, the controller is further configured to:
constructing two-dimensional texture coordinates according to a screen capture image picture, and constructing spherical texture coordinates according to the player model;
setting the range of the two-dimensional texture coordinate and the range of the spherical texture coordinate;
and mapping the points in the two-dimensional texture coordinate into the spherical texture coordinate according to the normal vector coordinate of each point on the spherical texture coordinate.
6. The virtual reality device of claim 5, wherein in the step of mapping points in the two-dimensional texture coordinates into the spherical texture coordinates, the controller is further configured to:
according to the resolution of the screen capture image picture, performing segmentation on the screen capture image picture;
performing texture sampling on each sub-region obtained by the cutting;
and converting the coordinates of the texture sampling points in the sub-area into the coordinates of spherical points by using an arcsine function.
7. The virtual reality device of claim 1, wherein in the step of parsing the screenshot picture file to be played, the controller is further configured to:
reading the picture type of a screenshot picture to be played;
if the picture type is the first type, executing the step of analyzing the screenshot picture file to be played to obtain a screenshot picture and a field angle;
and if the picture type is the second type, hiding a rendering background in a playing interface so as to display the screenshot picture in a rendering scene.
8. The virtual reality device of claim 1, wherein after the step of mapping the screen shot image to the player model, the controller is further configured to:
adding the player model mapped with a screen capture image picture in a rendering scene;
setting the position of the player model in the rendering scene so that the screen capture image picture covers the whole field angle;
shooting a current rendering scene through a virtual display camera to output the screen capture image picture.
9. A virtual reality device, comprising:
a display configured to display a user interface including a picture imaging region and a sky box region, the picture imaging region for presenting multimedia content; the sky box area is located around the image area and used for presenting rendering background content;
a controller configured to:
receiving a screen capture instruction input by a user;
responding to the screen capture instruction, and performing image shooting on the current rendering scene to obtain a screen capture image picture;
recording a shooting position and a field angle in the rendered scene when image shooting is performed;
and saving the screen capture image picture and the field angle to generate a screen capture picture file.
10. A screenshot picture playing method is applied to virtual reality equipment, the virtual reality equipment comprises a display and a controller, and the screenshot picture playing method comprises the following steps:
acquiring a control instruction which is input by a user and used for playing the screen capture picture;
responding to the control instruction, and analyzing the screenshot picture file to be played to obtain a screenshot image picture and a field angle;
calling a player model according to the field angle, wherein the player model is a curved surface model which is built in a rendering scene in advance according to the field angle;
mapping the screen shot image to the player model.
CN202110284754.4A 2021-01-18 2021-03-17 Virtual reality equipment and screenshot picture playing method Pending CN114327700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/137060 WO2022151883A1 (en) 2021-01-18 2021-12-10 Virtual reality device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110065015 2021-01-18
CN2021100650156 2021-01-18

Publications (1)

Publication Number Publication Date
CN114327700A true CN114327700A (en) 2022-04-12

Family

ID=76561582

Family Applications (7)

Application Number Title Priority Date Filing Date
CN202110097842.3A Active CN114286142B (en) 2021-01-18 2021-01-25 Virtual reality equipment and VR scene screen capturing method
CN202110280846.5A Active CN114302214B (en) 2021-01-18 2021-03-16 Virtual reality equipment and anti-jitter screen recording method
CN202110284754.4A Pending CN114327700A (en) 2021-01-18 2021-03-17 Virtual reality equipment and screenshot picture playing method
CN202110290401.5A Active CN113064684B (en) 2021-01-18 2021-03-18 Virtual reality equipment and VR scene screen capturing method
CN202110292608.6A Pending CN114327034A (en) 2021-01-18 2021-03-18 Display device and screen recording interaction method
CN202110359636.5A Pending CN114296949A (en) 2021-01-18 2021-04-02 Virtual reality equipment and high-definition screen capturing method
CN202110980427.2A Pending CN113655887A (en) 2021-01-18 2021-08-25 Virtual reality equipment and static screen recording method

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202110097842.3A Active CN114286142B (en) 2021-01-18 2021-01-25 Virtual reality equipment and VR scene screen capturing method
CN202110280846.5A Active CN114302214B (en) 2021-01-18 2021-03-16 Virtual reality equipment and anti-jitter screen recording method

Family Applications After (4)

Application Number Title Priority Date Filing Date
CN202110290401.5A Active CN113064684B (en) 2021-01-18 2021-03-18 Virtual reality equipment and VR scene screen capturing method
CN202110292608.6A Pending CN114327034A (en) 2021-01-18 2021-03-18 Display device and screen recording interaction method
CN202110359636.5A Pending CN114296949A (en) 2021-01-18 2021-04-02 Virtual reality equipment and high-definition screen capturing method
CN202110980427.2A Pending CN113655887A (en) 2021-01-18 2021-08-25 Virtual reality equipment and static screen recording method

Country Status (1)

Country Link
CN (7) CN114286142B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185594A (en) * 2022-09-06 2022-10-14 湖北芯擎科技有限公司 Data interaction method and device based on virtual display, electronic equipment and medium

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN117398680A (en) * 2022-07-08 2024-01-16 腾讯科技(深圳)有限公司 Virtual object display method and device, terminal equipment and storage medium
CN115942049A (en) * 2022-08-26 2023-04-07 北京博雅睿视科技有限公司 VR video-oriented visual angle switching method, device, equipment and medium
CN115665461B (en) * 2022-10-13 2024-03-22 聚好看科技股份有限公司 Video recording method and virtual reality device
CN116795316B (en) * 2023-08-24 2023-11-03 南京维赛客网络科技有限公司 Method, system and storage medium for playing pictures in scene in small window during screen projection

Family Cites Families (45)

Publication number Priority date Publication date Assignee Title
JPH0342690A (en) * 1989-07-10 1991-02-22 Konica Corp Image forming device
JP5279453B2 (en) * 2008-10-31 2013-09-04 キヤノン株式会社 Image shake correction apparatus, imaging apparatus, and image shake correction method
JP5685079B2 (en) * 2010-12-28 2015-03-18 任天堂株式会社 Image processing apparatus, image processing program, image processing method, and image processing system
US8606645B1 (en) * 2012-02-02 2013-12-10 SeeMore Interactive, Inc. Method, medium, and system for an augmented reality retail application
JP2013172418A (en) * 2012-02-22 2013-09-02 Nikon Corp Image handling apparatus and camera
CN113568506A (en) * 2013-01-15 2021-10-29 超级触觉资讯处理有限公司 Dynamic user interaction for display control and customized gesture interpretation
CN103293957A (en) * 2013-05-22 2013-09-11 上海新跃仪表厂 Satellite attitude maneuver method for performing routing planning relative to moving coordinate system
JP6461146B2 (en) * 2013-11-12 2019-01-30 ビーエルアールティー ピーティーワイ エルティーディーBlrt Pty Ltd Social media platform
JP6448218B2 (en) * 2014-05-12 2019-01-09 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, AND INFORMATION PROCESSING SYSTEM
KR20160034037A (en) * 2014-09-19 2016-03-29 삼성전자주식회사 Method for capturing a display and electronic device thereof
US10684485B2 (en) * 2015-03-06 2020-06-16 Sony Interactive Entertainment Inc. Tracking system for head mounted display
WO2017039348A1 (en) * 2015-09-01 2017-03-09 Samsung Electronics Co., Ltd. Image capturing apparatus and operating method thereof
CN105704539A (en) * 2016-02-15 2016-06-22 努比亚技术有限公司 Video sharing device and video sharing method
CN105847672A (en) * 2016-03-07 2016-08-10 乐视致新电子科技(天津)有限公司 Virtual reality helmet snapshotting method and system
WO2017156742A1 (en) * 2016-03-17 2017-09-21 深圳多哚新技术有限责任公司 Virtual reality-based image displaying method and related device
US10043302B2 (en) * 2016-04-18 2018-08-07 Beijing Pico Technology Co., Ltd. Method and apparatus for realizing boot animation of virtual reality system
CN106020482A (en) * 2016-05-30 2016-10-12 努比亚技术有限公司 Control method, virtual reality device and mobile terminal
CN105959666A (en) * 2016-06-30 2016-09-21 乐视控股(北京)有限公司 Method and device for sharing 3d image in virtual reality system
CN106201259A (en) * 2016-06-30 2016-12-07 乐视控股(北京)有限公司 A kind of method and apparatus sharing full-view image in virtual reality system
CN106843456B (en) * 2016-08-16 2018-06-29 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device based on posture tracking
CN106341603A (en) * 2016-09-29 2017-01-18 网易(杭州)网络有限公司 View finding method for virtual reality environment, device and virtual reality device
KR102612988B1 (en) * 2016-10-20 2023-12-12 삼성전자주식회사 Display apparatus and image processing method thereof
CN112132881A (en) * 2016-12-12 2020-12-25 华为技术有限公司 Method and equipment for acquiring dynamic three-dimensional image
US20180189980A1 (en) * 2017-01-03 2018-07-05 Black Sails Technology Inc. Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting
KR102434497B1 (en) * 2017-02-03 2022-08-18 워너 브로스. 엔터테인먼트 인크. Rendering of extended video in virtual reality
CN109952757B (en) * 2017-08-24 2020-06-05 腾讯科技(深圳)有限公司 Method for recording video based on virtual reality application, terminal equipment and storage medium
CN107678539A (en) * 2017-09-07 2018-02-09 歌尔科技有限公司 For wearing the display methods of display device and wearing display device
CN107590848A (en) * 2017-09-29 2018-01-16 北京金山安全软件有限公司 Picture generation method and device, electronic equipment and storage medium
CN108024079B (en) * 2017-11-29 2021-08-03 Oppo广东移动通信有限公司 Screen recording method, device, terminal and storage medium
CN108073346A (en) * 2017-11-30 2018-05-25 深圳市金立通信设备有限公司 A kind of record screen method, terminal and computer readable storage medium
CN107957836B (en) * 2017-12-05 2020-12-29 Oppo广东移动通信有限公司 Screen recording method and device and terminal
CN108289220B (en) * 2018-01-15 2020-11-27 深圳市奥拓电子股份有限公司 Virtual image processing method, image processing system, and storage medium
CN108733070A (en) * 2018-04-11 2018-11-02 广州亿航智能技术有限公司 Unmanned aerial vehicle (UAV) control method and control system
CN108682036B (en) * 2018-04-27 2022-10-25 腾讯科技(深圳)有限公司 Pose determination method, pose determination device and storage medium
CN109002248B (en) * 2018-08-31 2021-07-20 歌尔光学科技有限公司 VR scene screenshot method, equipment and storage medium
US10569164B1 (en) * 2018-09-26 2020-02-25 Valve Corporation Augmented reality (AR) system for providing AR in video games
CN109523462A (en) * 2018-11-14 2019-03-26 北京奇艺世纪科技有限公司 A kind of acquisition methods and device of VR video screenshotss image
TWI700000B (en) * 2019-01-29 2020-07-21 威盛電子股份有限公司 Image stabilization method and apparatus for panoramic video, and method for evaluating image stabilization algorithm
CN110087123B (en) * 2019-05-15 2022-07-22 腾讯科技(深圳)有限公司 Video file production method, device, equipment and readable storage medium
CN110221795B (en) * 2019-05-27 2021-10-22 维沃移动通信有限公司 Screen recording method and terminal
CN110304270B (en) * 2019-06-03 2021-01-05 宁波天擎航天科技有限公司 Omnibearing launch control method and device for carrier rocket and computer equipment
CN110505471B (en) * 2019-07-29 2021-09-14 青岛小鸟看看科技有限公司 Head-mounted display equipment and screen acquisition method and device thereof
CN110874168A (en) * 2019-09-30 2020-03-10 华为技术有限公司 Display method and electronic equipment
CN110975277B (en) * 2019-12-18 2024-01-12 网易(杭州)网络有限公司 Information processing method and device in augmented reality game, medium and electronic equipment
CN112188087B (en) * 2020-09-10 2021-12-03 北京为快科技有限公司 Panoramic video screenshot method and device, storage medium and computer equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185594A (en) * 2022-09-06 2022-10-14 湖北芯擎科技有限公司 Data interaction method and device based on virtual display, electronic equipment and medium
CN115185594B (en) * 2022-09-06 2023-01-06 湖北芯擎科技有限公司 Data interaction method and device based on virtual display, electronic equipment and medium

Also Published As

Publication number Publication date
CN113655887A (en) 2021-11-16
CN114302214B (en) 2023-04-18
CN114286142A (en) 2022-04-05
CN114302214A (en) 2022-04-08
CN113064684B (en) 2023-03-21
CN114296949A (en) 2022-04-08
CN114286142B (en) 2023-03-28
CN114327034A (en) 2022-04-12
CN113064684A (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN110636353B (en) Display device
CN113064684B (en) Virtual reality equipment and VR scene screen capturing method
TWI530157B (en) Method and system for displaying multi-view images and non-transitory computer readable storage medium thereof
CN111970456B (en) Shooting control method, device, equipment and storage medium
CN112732089A (en) Virtual reality equipment and quick interaction method
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
JP2019512177A (en) Device and related method
CN114302221B (en) Virtual reality equipment and screen-throwing media asset playing method
WO2020206647A1 (en) Method and apparatus for controlling, by means of following motion of user, playing of video content
CN112929750B (en) Camera adjusting method and display device
KR101773891B1 (en) System and Computer Implemented Method for Playing Composite Video through Selection of Environment Object in Real Time Manner
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
WO2022151883A1 (en) Virtual reality device
WO2022193931A1 (en) Virtual reality device and media resource playback method
WO2022151882A1 (en) Virtual reality device
CN115129280A (en) Virtual reality equipment and screen-casting media asset playing method
CN114286077A (en) Virtual reality equipment and VR scene image display method
WO2020248682A1 (en) Display device and virtual scene generation method
CN112905007A (en) Virtual reality equipment and voice-assisted interaction method
CN112732088B (en) Virtual reality equipment and monocular screen capturing method
KR101843024B1 (en) System and Computer Implemented Method for Playing Composite Video through Selection of Environment Object in Real Time Manner
WO2022111005A1 (en) Virtual reality (vr) device and vr scenario image recognition method
US20230326161A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN114283055A (en) Virtual reality equipment and picture display method
CN114327032A (en) Virtual reality equipment and VR (virtual reality) picture display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination