WO2022151864A1 - Virtual reality device (虚拟现实设备) - Google Patents

Virtual reality device

Info

Publication number
WO2022151864A1
WO2022151864A1 (application PCT/CN2021/135509)
Authority
WO
WIPO (PCT)
Prior art keywords
screen recording
user
virtual reality
screen
reality device
Prior art date
Application number
PCT/CN2021/135509
Other languages
English (en)
French (fr)
Inventor
孟亚州
卢可敬
王大勇
姜璐珩
Original Assignee
海信视像科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202110065120.XA (published as CN112732089A)
Priority claimed from CN202110280846.5A (published as CN114302214B)
Application filed by 海信视像科技股份有限公司
Publication of WO2022151864A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • the present application relates to the field of virtual reality technology, in particular to virtual reality equipment.
  • Virtual Reality (VR) technology is a display technology that simulates a virtual environment through a computer, thereby giving people a sense of immersion in the environment.
  • a virtual reality device is a device that uses virtual display technology to present virtual images to users.
  • a virtual reality device includes two display screens for presenting virtual picture content, corresponding to the left and right eyes of the user respectively. When the contents displayed on the two display screens come from images of the same object from different viewing angles, a three-dimensional viewing experience can be brought to the user.
  • the virtual reality device can save the content displayed within a period of time in the form of video by performing a screen recording operation for subsequent viewing or sending it to other devices for playback.
  • when a virtual reality device performs a screen recording operation, the content displayed on the screen is directly captured at a specific frame rate and arranged in chronological order to form a video file.
  • the virtual reality device may display a screen recording control interface, so that the user can perform screen recording control through the screen recording control interface during the screen recording process, such as start/stop recording, etc.
  • these operations need to be completed through the screen recording control interface, so during screen recording control the screen has to jump to the screen recording control interface, and the recorded video file therefore contains the screen recording control interface.
  • for example, to stop recording the user has to call up the screen recording control interface and click the stop screen recording button; the called-up screen recording control interface and the click action block the screen content the user wants to record, reducing the user experience.
  • the virtual reality device includes: a display, a posture sensor, and a controller.
  • the display is configured to display a user interface;
  • the gesture sensor is configured to detect user gesture data in real time;
  • the controller is configured to perform the following program steps:
  • a screen recording image is taken in the rendering scene, so as to output a screen recording image with a stable shooting angle when the attitude change is less than the preset jitter threshold.
  • the anti-shake screen recording method further provided by the first aspect of the present application includes the following steps:
  • a screen recording image is taken in the rendering scene, so as to output a screen recording image with a stable shooting angle when the attitude change is less than the preset jitter threshold.
  • the virtual reality device and the anti-shake screen recording method provided by the first aspect of the present application can perform smoothing processing on the user gesture data after the user starts the screen recording, and take a screen recording image in the rendering scene according to the smoothed user gesture data.
  • the method can filter out, through a filtering operation, changes in the user's attitude data caused by subtle swings, so that when the attitude change is less than a preset shake threshold a screen recording image with a stable shooting angle is output, thereby alleviating the impact of shaking during screen recording.
  • the virtual reality device includes: a display, a posture sensor, and a controller.
  • the display is configured to display a user interface;
  • the gesture sensor is configured to detect user gesture data in real time;
  • the controller is configured to perform the following program steps:
  • a screen recording image is taken from the rendering scene according to the attitude change amount, so as to output a screen recording image with a stable shooting angle when the attitude change amount is less than a preset shaking threshold.
  • the anti-shake screen recording method further provided by the second aspect of the present application includes the following steps:
  • a screen recording image is taken from the rendering scene according to the attitude change amount, so as to output a screen recording image with a stable shooting angle when the attitude change amount is less than a preset shaking threshold.
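  • a minimal sketch of this threshold check follows (Python; the Euler-angle representation, the 2-degree threshold and the function name are illustrative assumptions rather than the patent's implementation):
```python
import math

JITTER_THRESHOLD_DEG = 2.0  # hypothetical preset shaking threshold, in degrees

def next_shooting_angle(prev_angle, new_attitude):
    """Return the shooting angle for the next screen recording frame.

    prev_angle and new_attitude are (x, y, z) Euler angles in degrees.
    When the attitude change amount stays below the jitter threshold, the
    previous angle is kept so the recorded image keeps a stable shooting
    angle; a larger change is treated as a deliberate head movement.
    """
    change = math.sqrt(sum((n - p) ** 2 for n, p in zip(new_attitude, prev_angle)))
    if change < JITTER_THRESHOLD_DEG:
        return prev_angle
    return new_attitude
```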
  • the display device includes: a display and a controller, wherein the display is configured to display a user interface; the controller is configured to execute the following program steps:
  • the screen recording control interface including at least an option to end recording and an option to continue recording;
  • a screen recording interaction method provided by the present application is applied to the above-mentioned display device, and the screen recording interaction method includes the following steps:
  • FIG. 1 is a schematic structural diagram of a display system including a virtual reality device in an embodiment of the application
  • FIG. 2 is a schematic diagram of a VR scene global interface in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a recommended content area of a global interface in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of entering the shortcut center by pressing a button in an embodiment of the application.
  • FIG. 6 is a schematic diagram of an interface in a screen recording process in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a screen recording interaction flow diagram of a virtual reality device in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a display flow chart of a screen recording control interface when an interactive command is executed in an embodiment of the present application
  • FIG. 9 is a schematic flowchart of executing a stop screen recording instruction in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the display flow chart of the screen recording control interface when the exit instruction is executed in the embodiment of the application;
  • FIG. 11 is a schematic flowchart of detecting the running state of a screen recording service in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a screen-recording control interface that wakes up before screen-recording in an embodiment of the application;
  • FIG. 13 is a schematic diagram of a display interface in screen recording according to an embodiment of the application.
  • FIG. 14 is a schematic diagram of a wake-up screen recording control interface in a screen recording process in an embodiment of the present application
  • FIG. 15 is a schematic diagram of an interface when the screen recording operation is ended in an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of waking up the screen recording control interface from a media asset playback interface in an embodiment of the application.
  • FIG. 17 is a schematic flowchart of an anti-shake screen recording method in an embodiment of the present application.
  • FIG. 18 is a schematic flowchart of obtaining a screen recording image through a virtual screen recording camera in an embodiment of the present application.
  • FIG. 19 is a schematic flowchart of setting a shooting angle of a virtual screen recording camera in a rendering scene according to an embodiment of the present application.
  • FIG. 21 is a schematic flowchart of another anti-shake screen recording method in an embodiment of the application.
  • FIG. 22 is a schematic flowchart of a static screen recording method in an embodiment of the application.
  • FIG. 23 is a schematic diagram of a time sequence relationship of a static screen recording method in an embodiment of the application.
  • FIG. 24 is a schematic diagram of a screen recording setting interface in an embodiment of the application.
  • FIG. 25 is a schematic flowchart of setting the viewing angle direction of a screen recording according to a screen recording method in an embodiment of the present application.
  • FIG. 26 is a schematic diagram of a flowchart of determining an action judgment amount by accumulating the number of frames in an embodiment of the present application
  • FIG. 27 is a schematic flowchart of adjusting the screen recording direction according to a count variable in an embodiment of the present application
  • FIG. 28 is a schematic flowchart of extracting multiple frames of user gesture data at intervals according to an embodiment of the present application.
  • module refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic or combination of hardware or/and software code capable of performing the function associated with that element.
  • the virtual reality device 500 is used as an example to illustrate the screen recording interaction method. It should be understood that the screen recording interaction method provided in this application can also be applied to other display devices 200.
  • the display device 200 may be a smart TV, a smart terminal, a personal computer, or the like.
  • the virtual reality device 500 generally refers to a display device that can be worn on the user's head to provide the user with an immersive experience, including but not limited to VR glasses, augmented reality (AR), VR games devices, mobile computing devices, and other wearable computers.
  • Some embodiments of the present application describe the technical solution by taking VR glasses as an example. It should be understood that the provided technical solution can be applied to other types of virtual reality devices at the same time.
  • the virtual reality device 500 can run independently, or be connected to other smart display devices as an external device, where the display device can be a smart TV, a computer, a tablet computer, a server, or the like.
  • the virtual reality device 500 can display a media image to provide a close-up image for the user's eyes, so as to bring an immersive experience.
  • the virtual reality device 500 may include a number of components for display and head-wearing.
  • the virtual reality device 500 may include, but is not limited to, at least one of a casing, a position fixing member, an optical system, a display assembly, a posture detection circuit, an interface circuit, and the like.
  • the optical system, the display component, the attitude detection circuit and the interface circuit can be arranged in the casing to present a specific display picture; the two sides of the casing are connected with position fixing members so that the device can be worn on the user's head.
  • in use, the attitude detection circuit has built-in attitude detection elements such as a gravitational acceleration sensor and a gyroscope; when the user's head moves or rotates, the user's attitude can be detected, and the detected attitude data can be transmitted to a processing element such as the controller, so that the processing element can adjust the specific screen content in the display assembly according to the detected attitude data.
  • the virtual reality device 500 shown in FIG. 1 can be connected to the display device 200, and a network-based display system is constructed among the virtual reality device 500, the display device 200 and the server 400, between which real-time data interaction can be performed.
  • the display device 200 can obtain media asset data from the server 400 and play it, and transmit the specific screen content to the virtual reality device 500 for display.
  • the display component of the virtual reality device 500 includes a display screen and a driving circuit related to the display screen.
  • the display component may include two display screens, corresponding to the user's left eye and right eye respectively.
  • the contents of the images displayed on the left and right screens differ slightly, for example showing respectively the left-eye and right-eye images captured by the left and right cameras of a 3D film source. Because the user's left and right eyes observe different screen content, a picture with a strong three-dimensional effect is perceived when wearing the device.
  • the optical system in the virtual reality device 500 is an optical module composed of multiple lenses.
  • the optical system is set between the user's eyes and the display screen, which can increase the optical path through the refraction of the optical signal by the lens and the polarization effect of the polarizer on the lens, so that the content displayed by the display component can be clearly displayed in the user's field of vision.
  • the optical system also supports focusing, that is, the focusing component adjusts the position of one or more of the lenses, changing the mutual distance between the lenses and thus the optical path, so as to adjust the picture sharpness.
  • the interface circuit of the virtual reality device 500 can be used to transmit interactive data.
  • the virtual reality device 500 can also be connected to other display devices or peripherals through the interface circuit, so as to exchange data with the connected devices and achieve more complex functions.
  • the virtual reality device 500 may be connected to a display device through an interface circuit, so as to output the displayed picture to the display device in real time for display.
  • the virtual reality device 500 may also be connected to a handle through an interface circuit, and the handle may be operated by the user by hand, so as to perform related operations in the VR user interface.
  • the VR user interface can be presented as a variety of different types of UI layouts according to user operations.
  • the user interface may include a global interface; the global UI after the AR/VR terminal is started is shown in FIG. 2, and the global UI can be displayed on the display screen of the AR/VR terminal or on the display of the display device.
  • the global UI may include at least one of a recommended content area 1 , a business classification extension area 2 , an application shortcut operation entry area 3 , and a suspended object area 4 .
  • Recommended content area 1 is used to configure TAB columns of different categories; within a column, media resources, topics and the like can be configured. The media resources can include 2D film and television, education courses, travel, 3D, 360-degree panorama, live broadcast, 4K film and television, program applications, games, travel and other businesses with media content. A column can choose different template styles and can support simultaneous recommendation and arrangement of media resources and themes, as shown in FIG. 3.
  • a status bar may also be provided at the top of the recommended content area 1, and a plurality of display controls may be provided in the status bar, including common options such as time, network connection status, and battery level.
  • the content included in the status bar can be customized by the user, for example, content such as weather and user avatar can be added.
  • the content contained in the status bar can be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display the time device window in the current interface, or jump to the calendar interface. When the user clicks the network connection status option, the virtual reality device 500 may display the WiFi list in the current interface, or jump to the network setting interface.
  • the content displayed in the status bar can be presented in different content forms according to the setting state of a specific item.
  • the time control can be directly displayed as specific time text information, and different texts are displayed at different times;
  • the power control can be displayed in different patterns according to the current remaining power status of the virtual reality device 500 .
  • the status bar is used to enable the user to perform common control operations, so as to quickly set the virtual reality device 500 . Since the setting procedure for the virtual reality device 500 includes many items, it is usually not possible to display all the commonly used setting options in the status bar. To this end, in some embodiments, the status bar may also be provided with extended options. After the extension option is selected, an extension window may be presented in the current interface, and a plurality of setting options may be further set in the extension window for implementing other functions of the virtual reality device 500 .
  • a "shortcut center” option may be set in the extension window.
  • the virtual reality device 500 may display the shortcut center window.
  • the shortcut center window may include at least one of the options of "screen capture”, “screen recording” and “screen projection”, which are used to wake up the corresponding functions respectively.
  • the business classification extension area 2 supports the configuration of extended classifications of different classifications. If there is a new business type, you can configure an independent TAB to display the corresponding page content.
  • the expansion classification in the business classification expansion area 2 can also be sorted and adjusted and offline business operations can be performed.
  • the content that the business classification extension area 2 can include: film and television, education, travel, applications, and "Mine".
  • the service classification extension area 2 is configured to display a large service classification TAB, and supports configuration of more classifications, and its icons support configuration, as shown in FIG. 3 .
  • the application shortcut operation entry area 3 can designate pre-installed applications to be displayed first for operation recommendation, and supports configuring special icon styles to replace default icons, and multiple pre-installed applications can be designated.
  • the application shortcut operation entry area 3 further includes a leftward movement control and a rightward movement control for moving the option target, for selecting different icons.
  • the interaction can be performed through peripheral devices; for example, the handle of the AR/VR terminal can operate the user interface of the AR/VR terminal, which includes a back button; a home button, whose long press can realize the reset function; volume up/down buttons; and a touch area that can realize the functions of clicking, sliding, pressing and dragging the focus.
  • Users can perform interactive operations through the global UI interface, and jump to a specific interface in partial interaction mode. For example, in order to play the media asset data, the user can click any media asset link icon in the global UI interface to start playing the media asset file corresponding to the media asset link. At this time, the virtual reality device 500 can control the jump to the media playback interface.
  • the virtual reality device 500 may also display a status bar at the top of the playing interface, and perform corresponding setting functions according to the set interaction mode. For example, as shown in FIG. 4, when the virtual reality device 500 is playing video media assets, if the user wants to perform a screen recording operation on the media asset picture, he can click the extension option on the status bar to call up the extension window, click the shortcut center option in the extension window to make the virtual reality device 500 display the shortcut center window on the playback interface, and finally click the "screen recording" option in the shortcut center window to make the virtual reality device 500 perform a screen recording operation, recording the pictures displayed for a period of time after the current moment and storing them as video.
  • the status bar can be hidden when the virtual reality device 500 plays the media asset picture, so as to avoid blocking the picture, and is displayed again when its display is triggered.
  • the status bar can be hidden when the user is not using the handle to perform an action, and displayed when the user is using the handle to perform an action.
  • the virtual reality device 500 can be configured to detect the state of the orientation sensor in the handle or the state of any button when playing a media image; when it detects that the detection value of the orientation sensor has changed, or that a button has been pressed, it controls the status bar to be displayed at the top of the playback interface, and when it detects that the orientation sensor has not changed within a set time and no button has been pressed, it controls the status bar in the playback interface to be hidden.
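  • a minimal sketch of this show/hide logic follows (Python; the timeout value, the movement tolerance, and the class and method names are illustrative assumptions rather than the patent's implementation):
```python
import time

HIDE_AFTER_S = 3.0      # assumed "set time" of inactivity before hiding
MOVE_TOLERANCE = 0.5    # assumed minimum orientation change (degrees) counted as movement

class StatusBarVisibility:
    """Show the status bar on handle activity; hide it again after inactivity."""

    def __init__(self):
        self.visible = False
        self.last_activity = time.monotonic()
        self.last_orientation = None

    def on_handle_sample(self, orientation, button_pressed):
        # orientation: (x, y, z) angles reported by the handle's orientation sensor.
        moved = (
            self.last_orientation is not None
            and max(abs(a - b) for a, b in zip(orientation, self.last_orientation)) > MOVE_TOLERANCE
        )
        self.last_orientation = orientation
        if moved or button_pressed:
            self.last_activity = time.monotonic()
            self.visible = True   # show the status bar at the top of the playback interface

    def tick(self):
        # Called periodically by the UI loop; hides the bar once the handle is idle.
        if self.visible and time.monotonic() - self.last_activity > HIDE_AFTER_S:
            self.visible = False
```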
  • the user can call up the shortcut center through the status bar, so as to click the corresponding option in the shortcut center window to complete the screen capture, screen recording and screen projection operations.
  • other interactive methods can also be used to invoke the shortcut center and display the shortcut center window; for example, as shown in FIG. 5, the user can invoke the shortcut center window by double-clicking the home button on the handle.
  • after the user selects an option in the shortcut center window, the corresponding function can be activated.
  • the activation mode of the corresponding function may be determined according to the actual interaction mode of the virtual reality device 500 .
  • the user can choose to perform recording only on the played media asset screen, or perform screen recording on the entire display content.
  • when recording only the played media asset picture, the virtual reality device 500 can obtain the media asset data (that is, the data obtained by parsing the video file) without rendering the 3D scene through the rendering engine, and copy it to output the screen recording result.
  • when recording the entire display content, the virtual reality device 500 can take screenshots, frame by frame, of the final picture shown on the display to obtain multiple consecutive screenshot images, thereby forming a video file and outputting the screen recording result.
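  • a minimal sketch of this frame-by-frame capture follows (Python; `capture_frame`, the default 30 fps rate and the stop flag are assumptions for illustration, and video encoding is omitted):
```python
import threading
import time

def record_frames(capture_frame, stop_event: threading.Event, fps: int = 30):
    """Take screenshots of the displayed picture at a fixed frame rate.

    capture_frame is assumed to return the currently displayed image; the
    frames, kept in chronological order, form the screen recording result.
    """
    frames = []
    interval = 1.0 / fps
    next_shot = time.monotonic()
    while not stop_event.is_set():
        frames.append(capture_frame())          # screenshot of the final display
        next_shot += interval
        time.sleep(max(0.0, next_shot - time.monotonic()))
    return frames
```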
  • the virtual reality device 500 may display the prompt content related to the screen recording in the playback interface.
  • a permanent recording symbol can be displayed in the upper right corner of the playback interface.
  • the recording symbol can be composed of a flashable dot and a time frame.
  • the virtual reality device 500 may further display a text prompt window (toast) in the current interface, which is used to prompt the user that the screen recording is currently started or guide the user to perform an interactive operation related to the screen recording.
  • the displayed text prompt window may include text content such as "screen recording has started”, “click the screen recording button again to end recording", and the like.
  • the text prompt window can be hidden within a preset time after being displayed; for example, the toast disappears after being displayed for 2 s, while the resident recording symbol remains displayed and the timing starts.
  • the user can turn on and off the screen recording function through the shortcut center interface provided by the virtual reality device 500, and can also use the shortcut center interface to control the screen recording process.
  • the shortcut center interface will block the original interface when it is displayed, and the video image obtained by the screen recording will include the image content corresponding to the shortcut center interface.
  • when the virtual reality device 500 performs the screen recording operation, if the user wants to end the screen recording, he needs to first wake up the shortcut center interface, click the "end screen recording" control in the shortcut center interface, and thereby input the end-screen-recording interaction instruction; at this time, the virtual reality device 500 will stop running the screen recording service and save the screen recording video file. Since the user wakes up the shortcut center interface before the end of the screen recording, the obtained screen recording video will contain pictures showing the shortcut center interface during the period near the end of the recording, which blocks the interface content to be recorded and reduces the user experience.
  • the virtual reality device 500 can run the screen recording interaction method so that the video file obtained from the screen recording does not contain the picture shown while the screen recording control interface is displayed.
  • the virtual reality device 500 includes: a display and a controller. Wherein, the display is used to display the user interface and the screen recording control interface such as the shortcut center. As shown in FIG. 7 , the controller is configured to execute the following program steps:
  • the control instruction for waking up the screen recording control interface can be input according to the interaction strategy set in the operating system of the virtual reality device 500 .
  • the virtual reality device 500 may display a shortcut center button in a specific position in the interface, such as the top status bar.
  • when the user clicks the shortcut center button, the virtual reality device 500 will display the shortcut center interface on the basis of the current interface.
  • the input of the control instruction for waking up the screen recording control interface is completed by the above-mentioned operation action of clicking the shortcut center button.
  • the user can also complete the control command input of the wake-up screen recording control interface by means of shortcut key interaction.
  • the user can call up the shortcut center interface by double-clicking the Home button on the operating handle, that is, wake up the screen recording control interface.
  • the control instruction for waking up the screen recording control interface is input by the above-mentioned double-clicking the Home button operation.
  • the user can also complete the input of control instructions by means of an external hardware interaction device or an integrated software interaction system.
  • an intelligent voice system may be built in the virtual reality device 500, and the user may input voice information, such as "screen recording control", through an audio input device such as a microphone.
  • the intelligent voice system recognizes the meaning of the voice information by converting, analyzing, and processing the user's voice information, and generates control instructions according to the recognition results to control the virtual reality device 500 to wake up the screen recording control interface.
  • the control instruction for waking up the screen recording control interface is input through the above-mentioned voice input process.
  • after the user inputs the control command through any of the above interactive methods, the virtual reality device 500 will display the screen recording control interface on the basis of the current user interface, and the displayed screen recording control interface will block part of the current user interface. Therefore, in order to reduce the interference of the screen recording control interface, in this embodiment the virtual reality device 500 can, in response to the control instruction, detect whether it is in the screen recording process, and if so, pause the screen recording service.
  • the screen recording service refers to a control program or a set of control programs related to screen recording integrated in the operating system of the virtual reality device 500 .
  • the controller can realize the screen recording function by running the control program, that is, continuously output multiple consecutive frame images according to the recording frame rate. Therefore, in this embodiment, suspending the screen recording service means that after receiving the control instruction, the controller suspends the execution of the screen capture related program, and does not output continuous frame images during the suspending period.
  • when the display of the screen recording control interface will not affect a screen recording process, the screen recording control interface can be displayed directly in the user interface, so that it can be used to execute start-screen-recording and other device interaction commands.
  • since the virtual reality device 500 suspends the screen recording service after receiving the control instruction, the displayed screen recording control interface will not be recorded by the screen recording service, so that the video file obtained from the screen recording does not include the pictures displayed during the pause period and therefore contains no content related to the screen recording control interface.
  • after suspending the screen recording service, the virtual reality device 500 may display a screen recording control interface for the user to perform interactive operations related to screen recording control. For example, after the user double-clicks the home button of the handle to wake up the shortcut center, the virtual reality device 500 can pause the screen recording service and then display the shortcut center interface. The user can control the virtual reality device 500 to end the screen recording by clicking the end-screen-recording control on the shortcut center interface, so as to save the video file obtained from the screen recording; the saved screen recording file does not include the user interface shown while the screen recording service was suspended.
  • the screen recording control interface in this embodiment is not limited to controlling the screen recording process; other commonly used function controls may also be set in the screen recording control interface, so that the user can complete other processing during the screen recording process. That is, after the user wakes up the screen recording control interface, the virtual reality device 500 can also execute other interactive operations according to the function controls customized in the screen recording control interface. For example, after the shortcut center interface is displayed, the user can save a screenshot of the user interface at the current moment as a picture file by clicking the "screenshot" button on the shortcut center interface. Since the virtual reality device 500 also produces corresponding image changes when performing some functions, for example a screen capture animation displayed during the screen capture process, pausing the screen recording service also keeps the animation effects of such other operations out of the screen recording result, making the recorded video smoother.
  • the user can also input control instructions for various functions through multiple interactive actions.
  • when the control command is used to wake up the screen recording control interface, the screen recording is suspended according to the above method, so that the generated screen recording file does not contain the user interface during the suspension of the screen recording service; when the control command is not used to wake up the screen recording control interface, various user interfaces can be displayed according to the interface display mode specified by the operating system, so that the generated screen recording file contains the user interface content related to interactive actions other than waking up the screen recording control interface.
  • the virtual reality device 500 can suspend the screen recording service when the user wakes up the screen recording control interface, so as to relieve the screen recording control interface from obscuring the displayed user interface and prevent the video obtained from the screen recording from including content corresponding to the screen recording control interface.
  • the virtual reality device 500 can also automatically hide the screen recording control interface and continue to run the screen recording service, as shown in FIG. 8 .
  • the step of executing the interactive instruction input by the user through the screen recording control interface further includes:
  • S310 Receive a continuation recording instruction input by the user through the continuation recording option
  • S320 In response to the instruction to continue recording, control the display to hide the screen recording control interface, and resume running the screen recording service.
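  • the pause, continue and end logic described here (together with the running-state check and exit handling described further below) can be summarised in a small state-machine sketch (Python; the recorder/ui interfaces and all names are assumptions used only for illustration, not the patent's actual program):
```python
from enum import Enum, auto

class RecState(Enum):
    IDLE = auto()      # screen recording service not started
    RUNNING = auto()   # frames are being captured
    PAUSED = auto()    # service started but capture is suspended

class ScreenRecordController:
    """Frames captured while the control interface is shown never reach the file."""

    def __init__(self, recorder, ui):
        self.recorder = recorder   # assumed object with pause()/resume()/stop_and_save()
        self.ui = ui               # assumed object with show/hide for the control interface
        self.state = RecState.IDLE

    def on_wake_control_interface(self):
        # Only pause when a recording is actually in progress (running-state check).
        if self.state == RecState.RUNNING:
            self.recorder.pause()
            self.state = RecState.PAUSED
        self.ui.show_control_interface()

    def on_continue_recording(self):
        self.ui.hide_control_interface()
        if self.state == RecState.PAUSED:
            self.recorder.resume()
            self.state = RecState.RUNNING

    def on_end_recording(self):
        self.ui.hide_control_interface()
        if self.state in (RecState.RUNNING, RecState.PAUSED):
            self.recorder.stop_and_save()   # saved file excludes the paused period
            self.state = RecState.IDLE

    # Exiting the control interface (e.g. the "return" key) behaves like "continue".
    on_exit_control_interface = on_continue_recording
```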
  • the interactive instruction input by the user on the screen recording control interface can be detected in real time, and the screen recording process can be controlled by executing the interactive instruction.
  • the user can click the "Screen Settings” button control in the screen recording control interface, and control the jump to the screen setting interface, so as to perform related screen settings such as brightness and color.
  • Users can also click the "Track Focus” button in the screen recording control interface, so that in the subsequent screen recording process, the focus cursor position can be tracked in real time through markers such as circles, so that the operation process can be more clearly displayed.
  • the virtual reality device 500 may also automatically hide the screen recording control interface, and resume running the screen recording service, so as to continue performing the screen recording operation. For example, when the user double-clicks the home button on the handle to wake up the shortcut center interface during the screen recording process, the virtual reality device 500 will suspend the screen recording service and display the shortcut center interface. The user then completes other interactive operations through the shortcut center interface, such as clicking the "screen projection" button control, and controlling the virtual reality device 500 to perform screen projection operations. After the virtual reality device 500 completes the screen projection related operations, the shortcut center interface will be automatically hidden, and the screen recording service will be resumed, so as to continue to perform the screen recording operation on the current user interface.
  • the interactive operations performed by the user through the screen recording control interface may include directly clicking controls on the screen recording control interface, and other operations related to the screen recording control interface.
  • the interactive operation of directly clicking the controls on the screen recording control interface depends on the arrangement of the controls in the screen recording control interface. For example, the user can control the virtual reality device 500 to end the screen projection by clicking the "end screencasting" button control on the screen recording control interface; and control the virtual reality device 500 to continue to run the screen recording service by exiting the screen recording control interface operation.
  • in the step of executing the interactive instruction, the controller is further configured to:
  • the user can control the virtual reality device 500 to stop screen recording by calling up the screen recording control interface and clicking the "end screen recording" button on it. Therefore, while displaying the screen recording control interface, the virtual reality device 500 can analyze in real time the control action corresponding to the interaction instruction input by the user, so as to determine whether it is an instruction to stop recording. For an interactive instruction with such a control action, the virtual reality device 500 may stop running the screen recording service in response to the instruction, and save and/or send the video content obtained from the screen recording, to realize the screen recording function.
  • the "End Screen Recording” button as an interactive control in the screen recording control interface, can be integrated with the "Start Screen Recording” option.
  • before screen recording starts, the function of the "screen recording" option in the shortcut center is to start the screen recording; during the screen recording process, the function of the same option is to end the screen recording.
  • the operation mode for the user to input the instruction for stopping the screen recording may be as follows: during the screen recording process, the user wakes up the shortcut center by double-clicking the home button on the handle. At this time, since the screen recording service has been started (and is currently paused), the control displayed for the "Screen Recording" option in the shortcut center interface is the "End Screen Recording" option; therefore, by clicking the "End Screen Recording" option, the user inputs the command to end the screen recording.
  • when the user inputs an exit instruction while the virtual reality device 500 displays the screen recording control interface, the virtual reality device 500 can hide the screen recording control interface and continue to run the screen recording service. That is, as shown in FIG. 10, after the step of controlling the display to display the screen recording control interface, the controller is further configured to:
  • the exit instruction refers to an instruction used to close or hide the screen recording control interface, and its input can be completed in different interactive ways. For example, when the screen recording control interface is displayed, the user can exit the screen recording control interface by pressing the "return" key on the handle, or by clicking an area outside the screen recording control interface.
  • after the exit instruction is input, the virtual reality device 500 can hide the screen recording control interface so that it no longer obscures the user interface displayed underneath, and then resume running the screen recording service, so that the screen recording service continues to record the displayed user interface.
  • in this way, the user can control the screen recording process and other processes by inputting interactive instructions, and after the virtual reality device 500 has executed the interactive instructions or the user has exited the screen recording control interface, the device continues to run the screen recording service, so that the screen recording service is automatically restored and the screen recording operation is completed without the screen recording control interface blocking the current interface.
  • the user may also input a control command to wake up the screen recording control interface when the screen recording service is not running.
  • the virtual reality device 500 may skip the step of suspending the screen recording service, and directly display the screen recording control interface. That is, as shown in FIG. 11, in some embodiments, in response to the control instruction, the step of suspending the screen recording service further includes:
  • the virtual reality device 500 can detect the current running state of the screen recording service in response to the control instruction.
  • if it is detected that the screen recording service is running, the screen recording service can be suspended according to the method in the above embodiment, and the display can be controlled to display the screen recording control interface; if it is not detected that the virtual reality device 500 is running the screen recording service, that is, the virtual reality device 500 is not currently in the screen recording process, the screen recording control interface is displayed directly without pausing the screen recording service.
  • when the screen recording service is not running, the user can control the start of screen recording through the screen recording control interface. Since the virtual reality device 500 will output the currently displayed picture once the screen recording starts, the start of recording is arranged so that the screen recording control interface does not block the start period of the screen recording video, as described below.
  • the user can implement the screen recording function through the following interactive operations: in a common application scenario, the user can start the screen recording function by double-clicking the Home button of the handle controller. After the system receives the double-click event of the handle button, it wakes up the operation controls in the shortcut center, as shown in Figure 12. Then click the screen recording button through the handle operation to initialize the screen recording service, that is, the screen recording service is in the ready state.
  • the virtual reality device 500 will automatically wake up the timer control, start the countdown, and start the screen recording after the countdown ends, that is, the screen recording service is in the running state.
  • the timing control starts timing and is always displayed on the top layer of the scene to remind the user of the recorded duration, as shown in Figure 13.
  • if the user wakes up the shortcut center again during recording, the virtual reality device 500 is controlled to suspend the screen recording service, that is, the screen recording service enters the paused state.
  • the timing control also pauses the timing, and can still remain displayed on the top layer, as shown in Figure 14.
  • if the user chooses to continue recording, the screen recording service continues to run, that is, the screen recording service is in the running state again; the shortcut center control is then hidden, and the timing control continues timing. If the user wants to end the screen recording at this point, clicking the screen recording button stops the screen recording service, as shown in FIG. 15; the shortcut center control is then hidden, and the timing control also stops timing and is hidden.
  • the screen recording interaction method provided in the above-mentioned embodiments can alleviate the screen recording related control actions from obscuring the user interface during the screen recording process, so as to obtain better screen recording video files.
  • the screen recording function is simple and quick to operate, responds quickly, and feeds back information in real time.
  • the controller is further configured to:
  • the virtual reality device 500 may detect the currently displayed interface type after the user inputs a control instruction for waking up the screen recording control interface, so as to determine whether a media asset playback interface is being displayed. If the interface type is the media asset playback interface, the media asset playback process is paused after the control command is obtained, that is, both the media asset playback process and the screen recording service are suspended; after the interactive command input by the user through the screen recording control interface has been executed, the media asset playback process continues, so as to avoid the problem that, because the screen recording service was suspended, the screen recording file lacks the media asset pictures played while the interactive instruction was being handled.
  • the virtual reality device 500 may present a media asset playing interface.
  • the user clicks the start screen recording option through the shortcut center to start and run the screen recording service, thereby recording the played media assets.
  • for example, if the user wakes up the screen recording control interface when playback has reached 0:17:03, the media asset playback process is also paused at 0:17:03 and continues after the user completes the interactive operation, resuming playback from 0:17:03 to alleviate the problem of part of the recorded picture being missing.
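  • a hedged sketch of this synchronised pause follows (Python; the player/recorder interfaces, including resume(from_position=...), are assumptions used only to illustrate that both processes are paused and resumed together):
```python
class SyncedRecordingSession:
    """Pause the media player and the screen recorder together, so no played
    content is missing from the recorded file."""

    def __init__(self, player, recorder):
        self.player = player        # assumed to expose pause()/resume()/position()
        self.recorder = recorder    # assumed to expose pause()/resume()
        self.paused_at = None

    def on_wake_control_interface(self):
        self.paused_at = self.player.position()   # e.g. 0:17:03
        self.player.pause()
        self.recorder.pause()

    def on_control_interface_closed(self):
        # Playback continues from the position it was paused at, and recording
        # resumes with it, so the recording has no gap in the played content.
        self.player.resume(from_position=self.paused_at)
        self.recorder.resume()
```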
  • some embodiments of the present application further provide a screen recording interaction method, which can be applied to the virtual reality device 500 to realize screen recording.
  • the screen recording interaction method includes the following steps:
  • S3 Execute the interactive instruction input by the user through the screen recording control interface, and when the interactive instruction is input through the end recording option, end the running of the screen recording service, so that the generated screen recording file does not include the user interface shown while the screen recording service was paused.
  • the virtual reality device and the screen recording interaction method can suspend the running of the screen recording service when the user wakes up the screen recording control interface, and display the screen recording control interface so that the user can input interactive commands through it to control the screen recording process.
  • while the screen recording control interface is displayed, the screen recording interaction method suspends the operation of the screen recording service, that is, suspends the recording of the currently displayed screen content, so that the video file obtained by the screen recording does not contain the picture corresponding to the screen recording control interface, relieving the screen recording control interface from blocking the screen content.
  • the displayed content can be saved in real time until the screen recording ends.
  • the video image obtained by the screen recording operation also changes with the user's interactive action as the user continues to use it. Since the screen recording process usually lasts for a certain period of time, during the screen recording, the picture displayed on the virtual reality device 500 may change with the user's wearing action.
  • for example, when the user's head turns, the viewing angle is adjusted so that the virtual reality device 500 displays the user interface under the new viewing angle, and the video image obtained by the screen recording also changes from the picture corresponding to the user interface under the initial viewing angle to the picture corresponding to the user interface under the new viewing angle.
  • the gesture sensor can detect the user's movement process to generate user gesture data.
  • the generated user gesture data is then transmitted to the controller, so that the controller can adjust the screen contents in the left display and the right display according to the user gesture data.
  • since the virtual reality device 500 needs to be worn on the user's head, when the user's head moves slightly and unconsciously during the screen recording process, the gesture sensor will also detect these unconscious slight movements and trigger the controller to adjust the displayed VR screen content. Such unintentional slight movements cause frequent shaking of the video image obtained from the screen recording, reducing the picture quality of the screen recording output image.
  • an anti-shake screen recording method is provided, and the method can be applied to the virtual reality device 500 .
  • the virtual reality device 500 includes a display, a gesture sensor, and a controller.
  • the controller of the virtual reality device 500 can be configured to execute the following program steps:
  • the virtual reality device 500 may start to perform the screen recording function after receiving the control instruction input by the user, that is, save the screen content corresponding to the virtual reality device 500 after the screen recording is started according to the set screen recording parameters.
  • the virtual reality device 500 can monitor the user's gesture data in real time through the gesture sensor. That is, the user's head swinging action is detected through the sensor device of the gravitational acceleration sensor and the gyroscope.
  • the smoothing process is to filter the data detected by the attitude sensor through a filtering algorithm to remove instantaneous fluctuations in the attitude data.
  • the virtual reality device 500 can extract from the user gesture data the components of the angle detected by the gesture sensor on the x-axis, the y-axis and the z-axis, and extract the attitude data corresponding to the previously output frame of the screen recording image, which likewise consists of the components of the angle on the x-axis, the y-axis and the z-axis.
  • the equivalent attitude data is calculated according to the user attitude data and the attitude data when the previous frame of the screen recording image is output. That is, the equivalent attitude data:
  • X_k = X_{k-1} + (XD_M - X_{k-1}) / (T_M - T_{k-1}) × c × (T_k - T_{k-1});
  • Y_k = Y_{k-1} + (YD_M - Y_{k-1}) / (T_M - T_{k-1}) × c × (T_k - T_{k-1});
  • Z_k = Z_{k-1} + (ZD_M - Z_{k-1}) / (T_M - T_{k-1}) × c × (T_k - T_{k-1});
  • where X_k, Y_k and Z_k are the angles in the X-axis, Y-axis and Z-axis directions when the k-th frame of the screen recording image is output;
  • X_{k-1}, Y_{k-1} and Z_{k-1} are the angles in the X-axis, Y-axis and Z-axis directions when the (k-1)-th frame of the screen recording image is output;
  • XD_M, YD_M and ZD_M are the angles in the X-axis, Y-axis and Z-axis directions detected by the attitude sensor;
  • T_M is the time at which the attitude sensor reports the XD_M, YD_M, ZD_M data;
  • T_k is the time of the k-th frame;
  • T_{k-1} is the time of the (k-1)-th frame;
  • c is an empirical constant between 0 and 1.
  • the attitude data X_{k-1}, Y_{k-1}, Z_{k-1} corresponding to the previous frame of the screen recording image can thus be extracted and combined with the reporting time of the attitude data and the interval between two frames of images to calculate the equivalent attitude data. It can be seen that, by referring to the attitude data and related time parameters corresponding to the previous screen recording image, the adjustment of the picture by the virtual reality device 500 is smoothed, thereby reducing the jitter in the final image.
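  • translated directly into code, the above formula could look like the following sketch (Python; the tuple layout, the parameter names and the example value of c are assumptions, and the function is kept stateless for clarity):
```python
def equivalent_attitude(prev, reported, t_prev, t_report, t_now, c=0.5):
    """Smoothed (equivalent) attitude for the k-th screen recording frame.

    prev     -- (X_{k-1}, Y_{k-1}, Z_{k-1}): angles used when frame k-1 was output
    reported -- (XD_M, YD_M, ZD_M): latest angles reported by the attitude sensor
    t_prev   -- T_{k-1}: time of frame k-1
    t_report -- T_M: time the sensor reported the latest angles
    t_now    -- T_k: time of frame k
    c        -- empirical constant between 0 and 1 (0.5 is only an example)
    """
    return tuple(
        p + (d - p) / (t_report - t_prev) * c * (t_now - t_prev)
        for p, d in zip(prev, reported)
    )
```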
  • the virtual reality device 500 may capture a screen recording image in the rendering scene according to the smoothed user gesture data.
  • the rendering scene refers to a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program.
  • the virtual reality device 500 based on the unity 3D rendering engine can construct a unity 3D scene when presenting the display screen.
  • various virtual objects and functional controls can be added to render a specific usage scene. For example, when playing multimedia resources, you can add a display panel to the Unity 3D scene, which is used to present multimedia resources.
  • virtual object models such as seats, speakers, and characters can be added to the Unity 3D scene to create a cinematic effect.
  • the virtual reality device 500 may also set a virtual camera in the unity 3D scene.
  • the virtual reality device 500 can set a left-eye camera and a right-eye camera in the unity 3D scene according to the positional relationship of the user's eyes, so that the two displays output the images rendered by the two cameras respectively.
  • the angles of the two virtual cameras in the unity 3D scene can be adjusted in real time following the attitude sensor of the virtual reality device 500, so that when the user wears the virtual reality device 500 and moves, pictures under different viewing angles can be output in real time.
  • the virtual reality device 500 can acquire multiple frames of screen recording images by rendering the scene, so as to generate a screen recording video file. For example, after the virtual reality device 500 starts to record the screen, it can acquire the image captured by the left-eye camera and/or the right-eye camera, and copy the image, so as to output the screen recording image. It is also possible to set a virtual camera specially used for screen recording, that is, a virtual screen recording camera, in the rendering scene, so that after the screen recording starts, the captured image is obtained through the virtual screen recording camera and output as a screen recording image.
  • since the virtual camera can be configured to follow the gesture data detected by the gesture sensor to adjust its shooting angle, after the gesture data is smoothed the content of the image captured by the virtual reality device 500 in the rendered scene also changes gradually, achieving an anti-shake effect.
  • in order to perform smoothing processing on the user gesture data, the virtual reality device 500 can set a virtual screen recording camera and control its shooting parameters after acquiring the screen recording instruction input by the user, so that the virtual screen recording camera can output a smooth screen recording image; that is, as shown in FIG. 18 and FIG. 19, the controller can be further configured to perform the following program steps:
  • S430 Set a shooting angle of the virtual screen recording camera according to the smoothed user gesture data, so as to perform image shooting on the rendering scene.
  • the virtual screen recording camera is a software program that depends on a rendering scene, and is used to photograph the rendering scene to obtain a screen recording image.
  • the virtual screen recording camera may be an intermediate camera set independently of the left eye camera and the right eye camera.
  • When the user uses the virtual reality device 500, the virtual screen recording camera may be loaded into the rendering scene along with the application, so that it can be enabled when the screen recording function is used.
  • When the screen recording function is not in use, the virtual screen recording camera does not capture the rendering scene, that is, it stays in a dormant state and does not output screen recording images.
  • When the user uses the screen recording function, the user inputs a control command through an interactive action.
  • After receiving the control command, the virtual reality device 500 may start the virtual screen recording camera, begin capturing images of the rendered scene, and output screen recording video images to realize the screen recording function.
  • After being enabled, the virtual screen recording camera can capture images in the rendered scene in the same way as the left-eye camera or the right-eye camera: it receives the user gesture data detected by the gesture sensor in real time and adjusts its shooting angle according to that data.
  • To prevent jitter during recording, the user pose data can be smoothed first and then input to the virtual screen recording camera.
  • the shooting angle of the virtual screen recording camera is set according to the smoothed user gesture data, so as to perform image shooting on the rendering scene.
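The following sketch puts steps S410–S430 together: a dedicated recording camera is loaded with the scene but stays dormant, is started when a screen-recording command arrives, and is aimed using the smoothed pose. All class and method names are illustrative placeholders, not the device's actual API.

```python
class Scene:
    def render(self, angle):
        # Stand-in for rendering the Unity-style scene from a given angle.
        return f"frame rendered at angle {angle}"


class VirtualRecordingCamera:
    """A screen-recording camera that only shoots while enabled (illustrative)."""

    def __init__(self):
        self.enabled = False          # S410: loaded with the scene, but dormant
        self.shooting_angle = None

    def start(self):                  # S420: started when the record command arrives
        self.enabled = True

    def set_angle(self, smoothed_pose):   # S430: aimed with the smoothed pose
        self.shooting_angle = smoothed_pose

    def capture(self, scene):
        if not self.enabled:
            return None               # dormant: no screen recording image is output
        return scene.render(self.shooting_angle)


scene = Scene()
recorder = VirtualRecordingCamera()
print(recorder.capture(scene))        # None: recording has not been started yet
recorder.start()
recorder.set_angle((11.6, 0.5, 5.0))  # e.g. the smoothed angles from the earlier sketch
print(recorder.capture(scene))
```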
  • Similarly, so that the user can view the virtual reality picture, in some embodiments the virtual reality device 500 may also load a virtual display camera in the rendering scene when the user uses the device. The virtual display camera includes a left-eye camera and a right-eye camera, and the virtual screen recording camera is set at the middle position between them.
  • During use, the left-eye camera can simulate the user's left eye to capture the left-eye image in the rendered scene, the right-eye camera can simulate the user's right eye to capture the right-eye image, and the virtual screen recording camera captures images of the rendered scene to obtain screen recording images. Since the virtual screen recording camera is set at the middle position between the left-eye camera and the right-eye camera, the screen recording image it outputs is closer to the display content the user directly sees.
  • After the left-eye camera and the right-eye camera in the rendering scene are enabled, their shooting angles can be set according to the unsmoothed user gesture data. Thus, in this embodiment, the attitude data detected by the attitude sensor can be copied into two streams: one is smoothed and sent to the virtual screen recording camera, while the other is not smoothed and is sent directly to the left-eye camera and the right-eye camera.
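A short sketch of this split is shown below: the raw pose is forwarded to the display cameras while a smoothed copy goes to the recording camera. The stub classes and the placeholder smoother are invented for this illustration only.

```python
class StubCamera:
    """Minimal stand-in for a virtual camera (illustrative only)."""
    def __init__(self, name):
        self.name, self.pose = name, None
    def apply_pose(self, yaw, pitch, roll):
        self.pose = (yaw, pitch, roll)
    def set_angle(self, pose):
        self.pose = pose


def dispatch_pose(raw_pose, smoother, display_cameras, recording_camera):
    """Send one attitude-sensor sample to the two kinds of virtual camera."""
    for cam in display_cameras:
        cam.apply_pose(*raw_pose)                   # display cameras follow the user exactly
    recording_camera.set_angle(smoother(raw_pose))  # recording camera gets the smoothed copy


left, right, rec = StubCamera("left"), StubCamera("right"), StubCamera("recording")
damp = lambda p: tuple(0.5 * a for a in p)          # placeholder smoother for the demo
dispatch_pose((12.0, -3.0, 0.5), damp, (left, right), rec)
print(left.pose, right.pose, rec.pose)
```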
  • the virtual reality device 500 can also output a screen recording video file in a form specified by the user, that is, the controller is further configured to perform the following program steps:
  • S510: Control the display to display a screen recording parameter setting interface;
  • S520: Receive screen recording parameters input by the user through the screen recording parameter setting interface;
  • S530: Set the shooting range of the virtual screen recording camera according to the screen recording image size;
  • S540: Set the output frame rate of the screen recording image of the virtual screen recording camera according to the screen recording frame rate.
  • the virtual reality device 500 may display a screen recording parameter setting interface during use, and the user may perform an interactive action to input the screen recording parameters through the user parameter interface.
  • the screen recording parameters include the size of the screen recording image and the screen recording frame rate.
  • For example, the user can enter a screen recording image width of 1920 and height of 1080 through the text input boxes on the screen recording parameter setting interface and, by dragging the scroll bar, set the frame rate of the video file to 60 Hz; the virtual reality device 500 can then be controlled to output a screen recording video with a picture size of 1920×1080 and a frame rate of 60 Hz.
  • the virtual reality device 500 can also set the shooting mode of the virtual screen recording camera in the rendering scene according to the screen recording parameters input by the user.
  • The shooting mode setting may include two aspects: on the one hand, the shooting range of the virtual screen recording camera is set according to the screen recording image size; on the other hand, the output frame rate of the screen recording image is set according to the screen recording frame rate.
  • Setting the shooting range can be achieved by adjusting parameters such as the position and focal length of the virtual screen recording camera, so that the main content in the rendering scene fills the screen recording image, thereby obtaining a clearer screen recording picture.
  • the output frame rate is the number of images captured by the virtual screen recording camera per unit of time. The higher the output frame rate, the smoother the final generated video image and the greater the amount of data processing when the corresponding file is generated. Therefore, the output frame rate should be controlled within a reasonable range, such as 30Hz-120Hz.
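As one way to read this, the sketch below applies user-entered recording parameters, clamping the frame rate to the 30 Hz–120 Hz range mentioned above. The parameter and key names are assumptions made for this illustration.

```python
def apply_recording_params(width, height, frame_rate_hz,
                           min_rate=30, max_rate=120):
    """Return the shooting settings for the virtual screen recording camera."""
    # Keep the output frame rate in a reasonable range: higher rates give a
    # smoother video but increase the amount of data to encode.
    frame_rate_hz = max(min_rate, min(max_rate, frame_rate_hz))
    return {
        "shooting_range": (width, height),   # e.g. adjust position/focal length to fill 1920x1080
        "output_frame_rate": frame_rate_hz,  # images captured per second
        "frame_interval_s": 1.0 / frame_rate_hz,
    }


print(apply_recording_params(1920, 1080, 60))   # the example from the text
print(apply_recording_params(1280, 720, 240))   # clamped down to 120 Hz
```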
  • the virtual reality device 500 may set corresponding control options on the screen recording parameter setting interface according to the screen recording output mode supported by the device, for the user to select and input. And after the user inputs the screen recording parameters, the virtual reality device 500 performs the screen recording operation according to the output mode specified by the screen recording parameters, thereby realizing the output of the screen recording video file in the form specified by the user. That is, the virtual reality device 500 can extract the screen recording images captured in the rendering scene frame by frame, perform encoding on multiple frames of the screen recording images to generate a screen recording video file, and finally store or send the screen recording video file.
  • the virtual reality device 500 performs smoothing processing on the user gesture data to filter the influence of shaking during the screen recording process.
  • the specific value of the user's attitude data can also be judged, so that when the user's attitude data changes little, the shooting angle of the virtual camera can be locked to achieve an anti-shake effect.
  • a virtual reality device 500 is also provided, including: a display, a posture sensor, and a controller.
  • the display is configured to display a user interface
  • the gesture sensor is configured to detect user gesture data in real time; as shown in Figure 21, the controller is configured to execute the following program steps:
  • S610: Receive a control instruction input by the user for starting screen recording;
  • S620: In response to the control instruction, acquire user attitude data through the attitude sensor;
  • S630: Calculate an attitude change amount according to the user attitude data;
  • S640: Capture a screen recording image from the rendering scene according to the attitude change amount, so as to output a screen recording image with a stable shooting angle when the attitude change amount is less than a preset shaking threshold.
  • As in the above embodiments, after receiving the screen recording control instruction input by the user, the virtual reality device 500 can acquire user gesture data through the gesture sensor in response to the control instruction.
  • The difference is that, in this embodiment, after acquiring the user attitude data, the angle values detected in the user attitude data can be read directly and the attitude change amount calculated from those angle values.
  • the posture change amount is the difference between the user's posture data and the posture data of the previous frame of the screen recording image.
  • For example, the user attitude data consists of the three-axis components of the tilt angle at the current moment in the spatial rectangular coordinate system, namely θ_x, θ_y and θ_z; the attitude data corresponding to the previous frame of the screen recording image likewise consists of the three-axis components of the tilt angle at the previous frame moment, namely θ_x0, θ_y0 and θ_z0. Therefore, the attitude change amounts can be calculated as (θ_x − θ_x0), (θ_y − θ_y0) and (θ_z − θ_z0).
  • After calculating the attitude change amount, the virtual reality device 500 can judge its specific value and capture the screen recording image in the rendering scene accordingly, so that when the attitude change amount is less than the preset jitter threshold, a screen recording image with a stable shooting angle is output.
  • By judging the attitude change amount, the virtual reality device 500 can remove posture changes caused by shaking, so that when the change is small, the virtual camera is controlled not to change its shooting angle and a screen recording image with a stable shooting angle is output.
  • For example, after calculating the attitude change amount, the virtual reality device 500 can compare it with the preset shaking threshold. If the attitude change amount is less than or equal to the preset shaking threshold, the current change is small and very likely caused by jitter, so the screen recording image can be captured from the rendering scene according to the attitude data used when the previous frame was output, thereby outputting a screen recording image with a stable shooting angle. If the attitude change amount is greater than the preset shaking threshold, the current change is large and was caused by the user's active movement while wearing the device, so the screen recording image can be captured from the rendering scene according to the newly acquired user posture data.
  • It should be noted that, in this embodiment, the virtual reality device 500 can likewise capture images of the rendered scene by loading a virtual screen recording camera in the rendered scene. During image capture, when the attitude change amount is less than the preset jitter threshold, the current attitude data is not input to the virtual screen recording camera, so that the camera completes the current frame at the shooting angle corresponding to the previous frame of attitude data and outputs a screen recording image with a stable shooting angle. When the attitude change amount is greater than the preset jitter threshold, the current attitude data is sent directly to the virtual screen recording camera, so that it can adjust its shooting angle according to the current attitude data and obtain the screen recording image from the new viewing angle.
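A minimal sketch of this per-frame decision is given below: the three-axis change (θ_x − θ_x0, θ_y − θ_y0, θ_z − θ_z0) is compared against the jitter threshold, and the recording camera keeps its previous angle when the change is small. The threshold value and names are illustrative; the patent only calls it a preset threshold.

```python
JITTER_THRESHOLD_DEG = 2.0   # illustrative value, not specified in the text

def choose_recording_angle(current_angles, previous_frame_angles,
                           threshold=JITTER_THRESHOLD_DEG):
    """Return the angles the recording camera should use for this frame."""
    change = [abs(cur - prev) for cur, prev in zip(current_angles, previous_frame_angles)]
    if max(change) <= threshold:
        # Small change: treat it as jitter and keep the previous shooting angle,
        # so the screen recording stays stable.
        return previous_frame_angles
    # Large change: the user moved deliberately, so follow the new pose.
    return current_angles


prev = (10.0, 0.0, 5.0)
print(choose_recording_angle((10.8, 0.3, 5.1), prev))   # locked to the previous angle
print(choose_recording_angle((25.0, 0.0, 5.0), prev))   # follows the new pose
```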
  • In order to reduce the amount of data processing, in some embodiments the posture data can also be compared at frame intervals.
  • For example, the virtual reality device 500 can acquire posture data once every 5 frames and calculate the posture change between successive acquisitions, and then judge whether the attitude change amount is less than the preset shaking threshold.
  • the virtual reality device 500 and the anti-shake screen recording method provided by the above embodiments can obtain user posture data after the user inputs a control command to start screen recording, and calculate the posture change amount according to the user posture data.
  • When the attitude change amount is less than the preset shake threshold, a screen recording image with a stable shooting angle can be output according to the attitude data used when the previous frame of the screen recording image was output, so as to alleviate the impact of shaking during screen recording.
  • In other embodiments, in order to improve the picture quality of the screen recording output, the static screen recording method provided in some embodiments of the present application may be applied to the virtual reality device 500, or to augmented reality devices, wearable devices, VR gaming devices and other head-mounted devices with the same functional hardware.
  • the virtual reality device 500 may acquire initial gesture data and real-time gesture data in response to the control instruction.
  • the initial attitude data is the user attitude data recorded at the input moment of the control instruction;
  • the real-time attitude data is the user attitude data continuously detected by the attitude sensor after receiving the control instruction.
  • the virtual reality device 500 may start a program related to screen recording.
  • the virtual reality device 500 can detect the current user gesture data through the gesture sensor, and record the user gesture data at the moment when the user inputs the control command, as the initial gesture data.
  • the user can continue to perform conscious or unconscious actions while wearing the virtual reality device 500 .
  • the attitude sensor will detect changes in the attitude data caused by the user's actions, so as to adjust the displayed VR screen. Therefore, after the user inputs the control instruction, the virtual reality device 500 can also detect the user's gesture data frame by frame to obtain real-time gesture data.
  • the virtual reality device 500 may perform calculation according to the detected data content to calculate the action judgment amount.
  • the action judgment amount is the accumulated time when the angle difference between the real-time attitude data and the initial attitude data is greater than the preset angle threshold.
  • the virtual reality device 500 may first calculate the angle difference according to the real-time posture data and the initial posture data, that is, calculate the change amount of the viewing angle direction corresponding to the user's action process.
  • In order to calculate the angle difference between the real-time posture data and the initial posture data, the virtual reality device 500 may, in the process of calculating the action judgment amount, first extract the orientation quaternion A(w_0, x_0, y_0, z_0) of the initial posture data and the orientation quaternion B(w, x, y, z) of the real-time posture data.
  • The inverse of the initial orientation quaternion A is then taken to obtain A^{-1} = (w_0, x_0, y_0, z_0)^{-1}, and the product of B and A^{-1} is calculated to obtain the attitude difference C.
  • The quaternion corresponding to the attitude difference C can also be normalized, so that the angle difference is calculated from the attitude difference C.
  • The angle difference θ is calculated according to the following formula:
  • θ = arccos(C·w) × 2;
  • where θ represents the angle difference; C represents the attitude difference, that is, C = B × A^{-1}; and w represents the first element value in the quaternion B(w, x, y, z) corresponding to the real-time attitude data. Since θ obtained in this way is expressed in radians, it can further be converted to degrees, that is, α = θ × 180/π.
  • the virtual reality device 500 can judge the calculated angle difference to determine whether it exceeds a preset angle judgment threshold. That is, the virtual reality device 500 can compare the angle difference α with the preset angle threshold α_0. If the angle difference is greater than the preset angle threshold, that is, α > α_0, the user action corresponding to the current real-time gesture data is likely to be an unconscious action of the user. Similarly, if the angle difference is less than or equal to the preset angle threshold, that is, α ≤ α_0, the user action corresponding to the current real-time gesture data may be the user's active action.
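The sketch below follows this calculation, assuming the (w, x, y, z) quaternion order and that C·w in the formula denotes the scalar (first) element of the normalized attitude difference C = B × A⁻¹; the helper names are illustrative.

```python
import math

def q_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_inverse(q):
    w, x, y, z = q
    n2 = w * w + x * x + y * y + z * z
    cw, cx, cy, cz = q_conjugate(q)
    return (cw / n2, cx / n2, cy / n2, cz / n2)

def q_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def angle_difference_deg(initial_q, realtime_q):
    """Angle between the initial pose A and the real-time pose B, in degrees."""
    c = q_normalize(q_multiply(realtime_q, q_inverse(initial_q)))  # C = B * A^-1
    w = max(-1.0, min(1.0, c[0]))        # clamp against rounding error
    theta = math.acos(w) * 2             # theta = arccos(C.w) * 2, in radians
    return theta * 180 / math.pi         # alpha = theta * 180 / pi


A = q_normalize((0.9659, 0.0, 0.2588, 0.0))   # initial pose, ~30 deg about the y axis
B = q_normalize((0.9063, 0.0, 0.4226, 0.0))   # real-time pose, ~50 deg about the y axis
print(round(angle_difference_deg(A, B), 1))   # ~20.0: right at a 20-degree threshold
```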
  • The angle differences of multiple frames of real-time posture data are then judged against the preset angle threshold to determine how long the state in which the angle difference exceeds the threshold lasts, that is, the action judgment amount. If the action judgment amount is less than or equal to the preset judgment threshold, that is, the angle difference exceeds the preset angle threshold only briefly, the change in user posture during this detection cycle is caused by the user's unconscious movement; the virtual reality device 500 can therefore lock the screen recording angle of view, that is, use the initial attitude data to set the recording viewing direction during this period, so as to generate stable screen recording data.
  • Similarly, if the action judgment amount is greater than the preset judgment threshold, that is, the angle difference remains above the preset angle threshold for a longer time, the change in user posture during this detection cycle is caused by the user's active action; the virtual reality device 500 can then follow the user's action and align the recording viewing direction to the new angle, that is, use the real-time attitude data to update the viewing direction of the screen recording.
  • For example, the virtual reality device 500 can set the angle threshold to 20° and the judgment threshold to 0.2 s; that is, when the state in which the angle difference caused by the user's action exceeds 20° lasts for more than 0.2 s, it is determined that the current real-time posture data results from the user's active action. Therefore, after determining that the angle difference has exceeded 20° for more than 0.2 s, the virtual reality device 500 can align the recording angle to the new angle, that is, the angle corresponding to the latest frame of real-time attitude data, and continue recording.
  • Conversely, if it is determined that the angle difference never exceeds 20°, or that the state in which it exceeds 20° does not last more than 0.2 s, the virtual reality device 500 can keep performing the screen recording operation at the initial angle, that is, the recording direction is set according to the angle corresponding to the initial attitude data recorded when the user input the control instruction.
  • It should be noted that, after the virtual reality device 500 determines that the action judgment amount is greater than the preset judgment threshold and uses the real-time gesture data to update the recording viewing direction, it enters the next detection cycle.
  • That is, the real-time attitude data (w_n, x_n, y_n, z_n) used to update the recording viewing direction serves as the initial attitude data of the next detection cycle, and within that cycle the newly detected real-time attitude data (w_{n+1}, x_{n+1}, y_{n+1}, z_{n+1}), (w_{n+2}, x_{n+2}, y_{n+2}, z_{n+2}), ... are compared with this initial attitude data (w_n, x_n, y_n, z_n) to calculate the angle difference and the accumulated time during which the angle difference exceeds the angle threshold.
  • By analogy, through multiple detection cycles, the virtual reality device 500 can continuously re-align the recording angle to the angle corresponding to the user's actions and complete the screen recording operation while keeping the recorded output picture stable.
  • It can be seen from the above that the static screen recording method provided in these embodiments can comprehensively consider both the angle change and how long it is maintained, accurately determine the cause of changes in the user's posture data, intelligently lock or unlock the screen recording angle, and make the output video picture more stable.
  • the static screen recording function can be used as a screen recording mode, and is set in a specific control interface for the user to select.
  • a "static screen recording” option and a "dynamic screen recording” option can be set in the screen recording control interface of the virtual reality device 500. Users can click different options to set the screen recording method used in the screen recording process.
  • the virtual reality device 500 may, in response to the control instruction, parse the screen recording mode specified in the control instruction.
  • the screen recording mode may include static screen recording and dynamic screen recording. That is, when the user selects the static screen recording option in the screen recording setting interface, the screen recording method specified in the input control command is static screen recording; similarly, when the user selects the dynamic screen recording option in the screen recording setting interface, The screen recording mode specified in the input control command is dynamic screen recording.
  • The virtual reality device 500 may execute different screen recording processes for different screen recording modes. That is, if the screen recording mode is static screen recording, the steps of acquiring the initial attitude data and the real-time attitude data are performed, the action judgment amount is calculated from the initial and real-time attitude data in the manner provided in the above embodiments, and different recording behaviors are set according to the action judgment amount. If the screen recording mode is dynamic screen recording, the virtual reality device 500 can follow the conventional recording approach: detect the real-time attitude data through the attitude sensor, use the real-time attitude data to set the viewing direction of the screen recording, cancel the locking of the recording viewing angle during the recording process, and generate screen recording data identical to the content the user is viewing.
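A small sketch of this branch is given below; the mode strings, stub classes, and handler names are placeholders invented for the illustration.

```python
class StubSensor:
    def read(self):
        return (0.0, 0.0, 0.0)          # yaw, pitch, roll at the moment of the command

class StubRecorder:
    def set_angle(self, pose):
        self.angle = pose


def start_recording(control_instruction, sensor, recorder):
    """Dispatch to static or dynamic screen recording based on the instruction."""
    mode = control_instruction.get("mode", "dynamic")
    if mode == "static":
        # Static recording: remember the pose at the moment the command arrived;
        # it stays in use until the action judgment amount unlocks the angle.
        initial_pose = sensor.read()
        recorder.set_angle(initial_pose)
        return "static", initial_pose
    # Dynamic recording: simply follow the real-time pose, with no locking.
    recorder.set_angle(sensor.read())
    return "dynamic", None


print(start_recording({"mode": "static"}, StubSensor(), StubRecorder()))
print(start_recording({"mode": "dynamic"}, StubSensor(), StubRecorder()))
```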
  • It can be seen that, by providing a screen recording setting interface, the virtual reality device 500 enables the user to select the static or dynamic screen recording mode based on that interface, so that the virtual reality device 500 can meet the personalized needs of different users.
  • the virtual reality device 500 may generate a screen recording image in real time according to the VR image and output the screen recording data.
  • the picture content corresponding to the screen recording picture may be the same as the picture displayed by the virtual reality device 500 .
  • the screen recording picture may directly multiplex the picture content displayed by the left monitor or the right monitor.
  • However, static screen recording requires the recording angle to be locked when the user makes small, brief, unconscious movements, while, in order to provide a better sense of immersion, the images displayed on the left display and the right display change viewing angle in real time following the user's posture. Therefore, the recorded content, which is free of frequent shaking, may be partially different from the content the user actually views.
  • In order to meet the needs of static screen recording, in some embodiments the virtual reality device 500 may set up an independent virtual screen recording camera specially used for the screen recording operation, and use it to capture images of the rendered scene to generate the screen recording data. That is, in the step of acquiring the initial posture data and the real-time posture data, the virtual reality device 500 may first add a virtual screen recording camera to the rendering scene.
  • the virtual reality device 500 may further set the output frame rate of the screen recording data, and shoot a plurality of consecutive frame images of the current rendering scene according to the set output frame rate to generate the screen recording data .
  • Using a virtual screen recording camera independent of the left-eye and right-eye cameras for screen recording can reduce the interference of the recording process with the user's normal viewing, and because the virtual screen recording camera can be kept at a fixed shooting angle for relatively long periods, the virtual reality device 500 can ensure that the recorded video content does not shake frequently, giving users a better viewing experience.
  • the virtual reality device 500 judges the user's action mainly based on the action judgment amount, that is, the cumulative time when the angle difference between the real-time posture data and the initial posture data is greater than the preset angle threshold.
  • Since the sampling frame rate of the attitude sensor is generally fixed, after the user puts on the virtual reality device 500 the collected user attitude data can be fed back to the controller at that sampling frame rate.
  • For example, if the sampling frame rate of the attitude sensor is 60 FPS (frames per second), then within 1 s the attitude sensor can feed back 60 frames of real-time attitude data to the display device 200, and the time interval between two adjacent frames of real-time attitude data is a fixed 0.016 s. Based on this, when calculating the action judgment amount, the virtual reality device 500 may determine the accumulated time during which the angle difference is greater than the preset angle threshold by accumulating frame counts.
  • That is, as shown in Figure 26, in the step of calculating the action judgment amount, the virtual reality device 500 may sequentially acquire multiple frames of real-time posture data, calculate the angle difference between each frame of real-time posture data and the initial posture data, and record the number of consecutive accumulated frames in which the angle difference is greater than the preset angle threshold, so as to determine from this consecutive frame count the accumulated time during which the angle difference exceeds the threshold.
  • If the number of consecutively accumulated frames is greater than the preset accumulated-frame threshold, it is determined that the action judgment amount is greater than the preset judgment threshold; if the number of consecutively accumulated frames is less than or equal to the preset accumulated-frame threshold, it is determined that the action judgment amount is less than or equal to the preset judgment threshold.
  • For example, for a 60 FPS attitude sensor, the virtual reality device 500 may set the cumulative frame threshold to 100 frames, which corresponds to an accumulated time of about 1.5 s in the state where the angle difference is greater than the preset angle threshold. Therefore, after determining that the angle difference between one frame of real-time attitude data and the initial attitude data is greater than the preset angle threshold, that is, α_1 > α_0, the virtual reality device 500 can judge every subsequent frame of user attitude data.
  • If, within one detection period, 100 consecutive frames of real-time attitude data all have an angle difference from the initial attitude data greater than the preset angle threshold, that is, α_2 > α_0, α_3 > α_0, α_4 > α_0, ..., it is determined that the accumulated time in which the angle difference exceeds the threshold is more than 1.5 s, that is, the action judgment amount is greater than the preset judgment threshold.
  • Similarly, if within one detection period the angle difference between any frame of real-time attitude data and the initial attitude data is less than or equal to the preset angle threshold, for example α_10 ≤ α_0, it is determined that the accumulated time in which the angle difference exceeds the threshold does not reach 1.5 s, that is, the action judgment amount is less than or equal to the preset judgment threshold.
  • In order to facilitate judging multiple consecutive frames of real-time attitude data, as shown in Figure 27, in some embodiments the virtual reality device 500 may, in the step of recording the number of consecutive accumulated frames in which the angle difference exceeds the threshold, create a count variable N to store the cumulative number of frames in the state where the angle difference is greater than the preset angle threshold.
  • Each frame is then judged in turn: if the angle difference is greater than the preset angle threshold, the count variable is incremented by one, that is, N = N_0 + 1; if the angle difference is less than or equal to the preset angle threshold, the count variable is cleared, that is, N = 0. For example, whenever the virtual reality device 500 detects that a frame of real-time attitude data has been updated, it judges by direction comparison whether the actual direction corresponding to the current real-time attitude data deviates by more than 20 degrees from the screen recording direction corresponding to the initial attitude data. If it does, the count variable N is incremented by 1 until the number of consecutive accumulated frames represented by N reaches 100, that is, after about 1.5 s, at which point the recording direction is updated and N is cleared to start counting again. If the actual direction does not deviate by more than 20 degrees from the recording direction, N can be cleared directly and the next detection cycle begins.
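A compact sketch of this counter is shown below, using the 20° angle threshold and 100-frame limit from the example; the update function and the sample data are illustrative.

```python
ANGLE_THRESHOLD_DEG = 20
FRAME_LIMIT = 100            # roughly 1.5 s of frames at the sensor's sampling rate

def update_counter(n, angle_difference_deg):
    """Update the consecutive-frame counter N for one new real-time pose frame.

    Returns (new_n, realign) where realign is True when the recording
    direction should be re-aligned to the current pose.
    """
    if angle_difference_deg > ANGLE_THRESHOLD_DEG:
        n += 1                       # N = N0 + 1: still deviating from the recording direction
        if n >= FRAME_LIMIT:
            return 0, True           # sustained deviation: update direction, restart counting
        return n, False
    return 0, False                  # back within the threshold: clear the counter


n = 0
for frame, diff in enumerate([25, 27, 26, 5, 30] + [24] * 100):
    n, realign = update_counter(n, diff)
    if realign:
        print(f"re-align the recording direction at frame {frame}")
```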
  • In the above manner, the virtual reality device 500 judges the angle difference for every frame of real-time attitude data and decides from the frame-by-frame results whether the recording angle needs to be adjusted. Such a judgment method has high precision, but the virtual reality device 500 needs to analyze and calculate frame by frame, so the judgment process occupies more computing resources and increases the processor load of the virtual reality device 500. As shown in Figure 28, in order to reduce the load on the processor, in some embodiments the virtual reality device 500 may also, in the step of calculating the action judgment amount, compare the angle difference with the preset angle threshold and record the acquisition time of the real-time pose data whose angle difference is greater than the preset angle threshold.
  • For example, when it is judged that the angle difference between the real-time gesture data (w_1, x_1, y_1, z_1) and the initial gesture data is greater than the preset angle threshold, the virtual reality device 500 can record the acquisition time corresponding to that real-time gesture data, that is, 16:00:01:000.
  • the virtual reality device 500 may start a new detection period, and within the preset detection period after the acquisition time, extract multiple frames of user gesture data at intervals.
  • the extraction interval of the multi-frame user gesture data may be determined according to time or the number of frames.
  • the virtual reality device 500 may extract a frame of real-time pose data every 0.1s in a detection period of 1.5s.
  • the virtual reality device 500 may extract one frame of real-time posture data every 10 frames within a detection period of 1.5s.
  • Based on the frames of user gesture data extracted at intervals, the virtual reality device 500 may perform the judgment again to determine whether the angle difference between these frames and the initial gesture data exceeds the preset angle threshold. If the angle differences between all of the extracted frames and the initial posture data are greater than the preset angle threshold, it can be determined that the action judgment amount is greater than the preset judgment threshold; if the angle difference between any extracted frame and the initial posture data is less than or equal to the preset angle threshold, it can be determined that the action judgment amount is less than or equal to the preset judgment threshold.
  • It can be seen that, by extracting user gesture data at intervals within the preset detection period, the data processing amount of the virtual reality device 500 can be greatly reduced while still meeting the basic action-judgment accuracy, and the extraction interval time or interval frame count can be controlled to meet different user needs.
  • When the user requires higher judgment accuracy, the interval time can be shortened or the number of interval frames reduced to increase the number of samples, so that the recording angle can be aligned with the actual angle in time.
  • When the user needs to reduce the processing load, the interval time can be extended or the number of interval frames increased to reduce the number of samples and the amount of data processing.
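To illustrate this interval-sampling variant, the sketch below checks only every k-th frame inside a 1.5 s detection window (about 90 frames at 60 FPS) and signals a re-alignment only if every sampled frame still deviates. The numbers come from the examples above; the function names are invented.

```python
def sustained_deviation(angle_history, threshold_deg=20,
                        window_frames=90, sample_every=10):
    """Check a detection window by sampling frames at intervals.

    angle_history : per-frame angle differences (degrees) collected after the
                    first frame that exceeded the threshold
    window_frames : length of the detection window in frames (~1.5 s at 60 FPS)
    sample_every  : only every N-th frame is examined, reducing processing load
    """
    if len(angle_history) < window_frames:
        return False                      # window not complete yet
    sampled = angle_history[:window_frames:sample_every]
    # Re-align only if every sampled frame still deviates beyond the threshold;
    # a single sampled frame back inside the threshold ends the detection cycle.
    return all(diff > threshold_deg for diff in sampled)


history = [24, 26, 25, 27] * 30          # 120 frames, all above 20 degrees
print(sustained_deviation(history))      # True: treat as an active head movement
history[40] = 3                          # one sampled frame drops back inside the threshold
print(sustained_deviation(history))      # False: keep the recording angle locked
```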
  • It should be noted that, in order to balance judgment accuracy and processing load, the virtual reality device 500 may also be provided with a load detection module, such as an MCU monitoring module, a memory detection module, or a temperature monitoring module.
  • the current load state of the virtual reality device 500 can be detected through the load detection module, and the interval time or the interval frame number can be dynamically set according to the current load state.
  • It can be seen from the above technical solutions that the virtual reality device 500 provided in the present application can execute the static screen recording method through its controller, so that after the user inputs the control command to start screen recording, the device detects the initial posture data and the real-time posture data and calculates the action judgment amount of the real-time posture data relative to the initial posture data, where the action judgment amount is the accumulated time during which the angle difference between the real-time posture data and the initial posture data is greater than a preset angle threshold. When the action judgment amount does not exceed the preset judgment threshold, it can be determined that the change in the user's posture data is caused by the user's unconscious small movements, and the initial posture data can still be used to set the screen recording viewing direction, generating screen recording data with stable video content.
  • When the action judgment amount exceeds the preset judgment threshold, it is determined that the change in the user's posture data is caused by the user's active action, so the real-time posture data can be used to update the viewing direction of the screen recording.
  • In this way, the method executed by the virtual reality device 500 can comprehensively consider both the angle change and how long it is maintained, accurately determine the cause of changes in the user's posture data, intelligently lock or unlock the screen recording angle, and make the output video picture more stable.


Abstract

本申请提供的虚拟现实设备,可以在用户控制开始录屏后,对用户姿态数据执行平滑处理,根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。

Description

虚拟现实设备
本申请要求在2021年1月18日提交中国专利局、申请号为202110065015.6、发明名称为“一种虚拟现实设备及快捷交互方法”的中国专利申请的优先权,在2021年1月18日提交中国专利局、申请号为202110065120.X、发明名称为“一种虚拟现实设备及快捷交互方法”的中国专利申请的优先权,本申请要求在2021年3月16日提交中国专利局、申请号为202110280846.5、发明名称为“一种虚拟现实设备及防抖动录屏方法”的中国专利申请的优先权,本申请要求在2021年3月18日提交中国专利局、申请号为202110292608.6、发明名称为“一种显示设备及录屏交互方法”的中国专利申请的优先权,本申请要求在2021年8月25日提交中国专利局、申请号为202110980427.2、发明名称为“一种虚拟现实设备及静态录屏方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及虚拟现实技术领域,尤其涉及虚拟现实设备。
背景技术
虚拟现实(Virtual Reality,VR)技术是通过计算机模拟虚拟环境,从而给人以环境沉浸感的显示技术。虚拟现实设备是一种应用虚拟显示技术为用户呈现虚拟画面的设备。通常,虚拟现实设备包括两个用于呈现虚拟画面内容的显示屏幕,分别对应于用户的左右眼。当两个显示屏幕所显示的内容分别来自于同一个物体不同视角的图像时,可以为用户带来立体的观影感受。
虚拟现实设备可以通过执行录屏操作,将一段时间内显示的内容以视频形式进行保存,以供后续查看或者发送至其他设备播放。通常,虚拟现实设备在执行录屏操作时,会直接将屏幕显示的内容按照特定的帧率进行截取,并按照时间顺序排列形成视频文件。
为了实现录屏功能,虚拟现实设备可以显示录屏控制界面,以供用户在录屏过程中,通过录屏控制界面执行录屏控制,例如开始/停止录制等。然而这些操作需要通过录屏控制界面完成,因此录屏控制时屏幕需要跳转显示录屏控制界面,导致录制的视频文件中包含录屏控制界面,例如,用户在结束录屏时,需要调出录屏控制界面并点击停止录屏按钮,而调出的录屏控制界面以及点击动作将会对用户想要录制的画面内容造成遮挡,降低用户体验。
发明内容
第一方面,本申请提供的虚拟现实设备,包括:显示器、姿态传感器以及控制器。其中,所述显示器被配置为显示用户界面;所述姿态传感器被配置为实时检测用户姿态数据;所述控制器被配置为执行以下程序步骤:
接收用户输入的用于开始录屏的控制指令;
响应于所述控制指令,对所述用户姿态数据执行平滑处理;
根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
基于上述虚拟现实设备,本申请第一方面还提供的防抖动录屏方法,包括以下步骤:
接收用户输入的用于开始录屏的控制指令;
响应于所述控制指令,对所述用户姿态数据执行平滑处理;
根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
由以上技术方案可知,本申请第一方面提供的虚拟现实设备及防抖动录屏方法,可以在用户控制开始录屏后,对用户姿态数据执行平滑处理,根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像。所述方法可以通过滤波操作对细微摆动造成的用户姿态数据变化进行过滤,从而在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面,缓解录屏时抖动造成的影响。
第二方面,本申请提供的虚拟现实设备,包括:显示器、姿态传感器以及控制器。其中,所述显示器被配置为显示用户界面;所述姿态传感器被配置为实时检测用户姿态数据;所述控制器被配置为执行以下程序步骤:
接收用户输入的用于开始录屏的控制指令;
响应于所述控制指令,通过所述姿态传感器获取用户姿态数据;
根据所述用户姿态数据计算姿态变化量,所述姿态变化量为用户姿态数据与前一帧录屏图像时的姿态数据的差值;
根据所述姿态变化量从渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
基于上述虚拟现实设备,本申请第二方面还提供的防抖动录屏方法,包括以下步骤:
接收用户输入的用于开始录屏的控制指令;
响应于所述控制指令,通过所述姿态传感器获取用户姿态数据;
根据所述用户姿态数据计算姿态变化量,所述姿态变化量为用户姿态数据与前一帧录屏图像时的姿态数据的差值;
根据所述姿态变化量从渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
三方面,本申请提供的显示设备,包括:显示器和控制器,其中,所述显示器被配置为显示用户界面;所述控制器被配置为执行以下程序步骤:
获取用户输入的用于唤醒录屏控制界面的控制指令,所述录屏控制界面至少包括结束录制选项和继续录制选项;
响应于所述控制指令,如果所述显示设备处于录屏过程中,暂停运行录屏服务,以及在暂停录屏服务后控制所述显示器显示所述录屏控制界面;
执行用户通过所述录屏控制界面输入的交互指令;当所述交互指令通过所述结束录制选项输入时,结束运行录屏服务,以使生成的录屏文件中不包含暂停运行录屏服务期间的用户界面。
四方面,本申请还提供的录屏交互方法,应用于上述显示设备,所述录屏交互方法包括以下步骤:
获取用户输入的用于唤醒录屏控制界面的控制指令,所述录屏控制界面至少包括 结束录制选项和继续录制选项;
响应于所述控制指令,如果所述显示设备处于录屏过程中,暂停运行录屏服务,以及在暂停录屏服务后控制所述显示器显示所述录屏控制界面;
执行用户通过所述录屏控制界面输入的交互指令;当所述交互指令通过所述结束录制选项输入时,结束运行录屏服务,以使生成的录屏文件中不包含暂停运行录屏服务期间的用户界面。
附图说明
为了更清楚地说明本申请的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例中包括虚拟现实设备的显示系统结构示意图;
图2为本申请实施例中VR场景全局界面示意图;
图3为本申请实施例中全局界面的推荐内容区域示意图;
图4为本申请实施例中通过状态栏进入快捷中心示意图;
图5为本申请实施例中通过按键进入快捷中心示意图;
图6为本申请实施例中录屏过程中界面示意图;
图7为本申请实施例中虚拟现实设备录屏交互流程示意图;
图8为本申请实施例中执行交互指令时录屏控制界面显示流程示意图;
图9为本申请实施例中执行停止录屏指令的流程示意图;
图10为本申请实施例中执行退出指令时录屏控制界面显示流程示意图;
图11为本申请实施例中检测录屏服务运行状态的流程示意图;
图12为本申请实施例中录屏前唤醒录屏控制界面示意图;
图13为本申请实施例中录屏中显示界面示意图;
图14为本申请实施例中录屏过程中唤醒录屏控制界面示意图;
图15为本申请实施例中结束录屏操作时界面示意图;
图16为本申请实施例中媒资播放界面唤醒录屏控制界面流程示意图;
图17为本申请实施例中一种防抖动录屏方法的流程示意图;
图18为本申请实施例中通过虚拟录屏相机获取录屏图像的流程示意图;
图19为本申请实施例中设置渲染场景中虚拟录屏拍摄角度的流程示意图;
图20为本申请实施例中设置录屏参数的流程示意图;
图21为本申请实施例中另一种防抖动录屏方法的流程示意图;
图22为本申请实施例中静态录屏方法流程示意图;
图23为本申请实施例中静态录屏方法时序关系示意图;
图24为本申请实施例中录屏设置界面示意图;
图25为本申请实施例中根据录屏方式设置录屏视角方向的流程示意图;
图26为本申请实施例中通过累计帧数确定动作判断量流程示意图;
图27为本申请实施例中根据计数变量调整录屏方向的流程示意图;
图28为本申请实施例中间隔提取多帧用户姿态数据时的流程示意图。
具体实施方式
为使本申请示例性实施例的目的、技术方案和优点更加清楚,下面将结合本申请示例性实施例中的附图,对本申请示例性实施例中的技术方案进行清楚、完整地描述,显然,所描述的示例性实施例仅是本申请一部分实施例,而不是全部的实施例。
基于本申请中示出的示例性实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。此外,虽然本申请中公开内容按照示范性一个或几个实例来介绍,但应理解,可以就这些公开内容的各个方面也可以单独构成一个完整技术方案。
应当理解,本申请中说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,例如能够根据本申请实施例图示或描述中给出那些以外的顺序实施。
此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖但不排他的包含,例如,包含了一系列组件的产品或设备不必限于清楚地列出的那些组件,而是可包括没有清楚地列出的或对于这些产品或设备固有的其它组件。
本申请中使用的术语“模块”,是指任何已知或后来开发的硬件、软件、固件、人工智能、模糊逻辑或硬件或/和软件代码的组合,能够执行与该元件相关的功能。
本说明书通篇提及的“多个实施例”、“一些实施例”、“一个实施例”或“实施例”等,意味着结合该实施例描述的具体特征、结构或特性包括在至少一个实施例中。因此,本说明书通篇出现的短语“在多个实施例中”、“在一些实施例中”、“在至少另一个实施例中”或“在实施例中”等并不一定都指相同的实施例。此外,在一个或多个实施例中,具体特征、结构或特性可以任何合适的方式进行组合。因此,在无限制的情形下,结合一个实施例示出或描述的具体特征、结构或特性可全部或部分地与一个或多个其他实施例的特征、结构或特性进行组合。这种修改和变型旨在包括在本申请的范围之内。
本申请实施例中以虚拟现实设备500为例,对录屏交互方式进行举例说明,应当理解的是,本申请提供的录屏交互方法,也可以应用于其他显示设备200中,例如,显示设备200可以是智能电视、智能终端、个人计算机等。
本申请实施例中,所述虚拟现实设备500泛指能够佩戴于用户头部,为用户提供沉浸感体验的显示设备,包括但不限于VR眼镜、增强现实设备(Augmented Reality,AR)、VR游戏设备、移动计算设备以及其它可穿戴式计算机等。本申请部分实施例以VR眼镜为例对技术方案进行阐述,应当理解的是,所提供的技术方案同时可应用于其他类型的虚拟现实设备。所述虚拟现实设备500可以独立运行,或者作为外接设备接入其他智能显示设备,其中,所述显示设备可以是智能电视、计算机、平板电脑、服务器等。
虚拟现实设备500可以在佩戴于用户头部后,显示媒资画面,为用户双眼提供近距离影像,以带来沉浸感体验。为了呈现媒资画面,虚拟现实设备500可以包括多个用于显示画面和头部佩戴的部件。以VR眼镜为例,虚拟现实设备500可以包括但不限于外壳、位置固定件、光学系统、显示组件、姿态检测电路、接口电路等部件中的至少一种。实际应用中,光学系统、显示组件、姿态检测电路以及接口电路可以设置 于外壳内,以用于呈现具体的显示画面;外壳两侧连接位置固定件,以佩戴于用户头部。
在使用时,姿态检测电路中内置有重力加速度传感、陀螺仪等姿态检测元件,当用户头部移动或转动时,可以检测到用户的姿态,并将检测到的姿态数据传递给控制器等处理元件,使处理元件可以根据检测到的姿态数据调整显示组件中的具体画面内容。
在一些实施例中,如图1所示的虚拟现实设备500可以接入显示设备200,并与服务器400之间构建一个基于网络的显示系统,在虚拟现实设备500、显示设备200以及服务器400之间可以实时进行数据交互,例如显示设备200可以从服务器400获取媒资数据并进行播放,以及将具体的画面内容传输给虚拟现实设备500中进行显示。
在一些实施例中,虚拟现实设备500的显示组件包括显示屏幕以及与显示屏幕有关的驱动电路。为了呈现具体画面,以及带来立体效果,显示组件中可以包括两个显示屏幕,分别对应于用户的左眼和右眼。在呈现3D效果时,左右两个屏幕中显示的画面内容会稍有不同,可以分别显示3D片源在拍摄过程中的左相机和右相机。由于用户左右眼观察到的画面内容,因此在佩戴时,可以观察到立体感较强的显示画面。
虚拟现实设备500中的光学系统,是由多个透镜组成的光学模组。光学系统设置在用户的双眼与显示屏幕之间,可以通过透镜对光信号的折射以及透镜上偏振片的偏振效应,增加光程,使显示组件呈现的内容可以清晰的呈现在用户的视野范围内。同时,为了适应不同用户的视力情况,光学系统还支持调焦,即通过调焦组件调整多个透镜中的一个或多个的位置,改变多个透镜之间的相互距离,从而改变光程,调整画面清晰度。
虚拟现实设备500的接口电路可以用于传递交互数据,除上述传递姿态数据和显示内容数据外,在实际应用中,虚拟现实设备500还可以通过接口电路连接其他显示设备或外设,以通过和连接设备之间进行数据交互,实现更为复杂的功能。例如,虚拟现实设备500可以通过接口电路连接显示设备,从而将所显示的画面实时输出至显示设备进行显示。又例如,虚拟现实设备500还可以通过接口电路连接手柄,手柄可以由用户手持操作,从而在VR用户界面中执行相关操作。
其中,所述VR用户界面可以根据用户操作呈现为多种不同类型的UI布局。例如,用户界面可以包括全局界面,AR/VR终端启动后的全局UI如图2所示,所述全局UI可显示于AR/VR终端的显示屏幕中,也可显示于所述显示设备的显示器中。全局UI可以包括推荐内容区域1、业务分类扩展区域2、应用快捷操作入口区域3以及悬浮物区域4中的至少一种。
推荐内容区域1用于配置不同分类TAB栏目;在所述栏目中可以选择配置媒资、专题等;所述媒资可包括2D影视、教育课程、旅游、3D、360度全景、直播、4K影视、程序应用、游戏、旅游等具有媒资内容的业务,并且所述栏目可以选择不同的模板样式、可支持媒资和专题同时推荐编排,如图3所示。
在一些实施例中,推荐内容区域1的顶部还可以设置有状态栏,在状态栏中可以设置有多个显示控件,包括时间、网络连接状态、电量等常用选项。状态栏中包括的内容可以由用户自定义,例如,可以添加天气、用户头像等内容。状态栏中所包含的内容可以被用户选中,以执行相应的功能。例如,用户点击时间选项时,虚拟现实设 备500可以在当前界面中显示时间设备窗口,或者跳转至日历界面。当用户点击网络连接状态选项时,虚拟现实设备500可以在当前界面显示WiFi列表,或者跳转至网络设置界面。
状态栏中显示的内容可以根据具体项目的设置状态呈现为不同的内容形式。例如,时间控件可以直接显示为具体的时间文字信息,并在不同的时间显示不同的文字;电量控件则可以根据虚拟现实设备500的当前电量剩余情况,显示为不同的图案样式。
状态栏用于使用户能够执行常用的控制操作,实现快速对虚拟现实设备500进行设置。由于对虚拟现实设备500的设置程序包括诸多项,因此在状态栏中通常不能将所有常用设置选项全部显示。为此,在一些实施例中,状态栏中还可以设置有扩展选项。扩展选项被选中后,可以在当前界面中呈现扩展窗口,在扩展窗口中可以进一步设置有多个设置选项,用于实现虚拟现实设备500的其他功能。
例如,在一些实施例中,扩展选项被选中后,可以在扩展窗口中设置“快捷中心”选项。用户在点击快捷中心选项后,虚拟现实设备500可以显示快捷中心窗口。快捷中心窗口中可以包括“截屏”、“录屏”以及“投屏”选项中的至少一个,用于分别唤醒相应的功能。
业务分类扩展区域2支持配置不同分类的扩展分类。如果有新的业务类型时,支持配置独立TAB,展示对应的页面内容。业务分类扩展区域2中的扩展分类,也可以对其进行排序调整及下线业务操作。在一些实施例中,业务分类扩展区域2可包括的内容:影视、教育、旅游、应用、我的。在一些实施例中,业务分类扩展区域2被配置为可展示大业务类别TAB,且支持配置更多的分类,其图标支持配置,如图3所示。
应用快捷操作入口区域3可指定预装应用靠前显示以进行运营推荐,支持配置特殊图标样式替换默认图标,所述预装应用可指定为多个。在一些实施例中,应用快捷操作入口区域3还包括用于移动选项目标的左向移动控件、右向移动控件,用于选择不同的图标。
在一些实施例中,可以通过外设执行交互,例如AR/VR终端的手柄可对AR/VR终端的用户界面进行操作,包括返回按钮;主页键,且其长按可实现重置功能;音量加减按钮;触摸区域,所述触摸区域可实现焦点的点击、滑动、按住拖拽功能。
用户可以通过全局UI界面执行交互操作,并在部分交互模式下,跳转到特定的界面中。例如,为了实现对媒资数据的播放,用户可以通过在全局UI界面中点击任一媒资链接图标,启动播放该媒资链接对应的媒资文件,此时,虚拟现实设备500可以控制跳转至媒资播放界面。
在跳转至特定的界面后,虚拟现实设备500还可以在播放界面的顶部显示状态栏,并依据设定的交互方式执行相应的设置功能。例如,如图4所示,在虚拟现实设备500播放视频媒资时,用户若想要对媒资画面执行录屏操作,可以通过点击状态栏上的扩展选项,调出扩展窗口,并在扩展窗口中点击快捷中心选项,使虚拟现实设备500在播放界面上显示快捷中心窗口,最后点击扩展中心窗口中的“录屏”选项,使虚拟现实设备500执行录屏操作,对当前时刻之后的一段时间内所显示的画面以视频方式进行存储。
其中,状态栏可以在虚拟现实设备500播放媒资画面时隐藏,以避免对媒资画面造成遮挡。而当用户执行特定交互动作时,触发显示。例如,在用户未使用手柄执行 动作时,可以对状态栏进行隐藏,而在用户使用手柄执行动作时,显示状态栏。为此,虚拟现实设备500可以被配置为在播放媒资画面时对手柄中的方位传感器状态或者任一按钮的状态进行检测,当检测到方位传感器的检测值发生变化,或者按钮被按下时,可以控制在播放界面顶部显示状态栏。当检测到方位传感器在设定的时间内没有发生变化,或者按钮没有被按下,则控制在播放界面隐藏状态栏。
可见,上述实施例中,用户可以通过状态栏调出快捷中心,从而在快捷中心窗口中点击相应的选项来完成截屏、录屏以及投屏操作。还可以采用其他交互方式调用快捷中心,并显示快捷中心窗口。例如,如图5所示,用户可以通过双击手柄上的home键调用快捷中心窗口。
用户可以在快捷中心窗口中选择任一图标后,启动相应的功能。其中,相应的功能的启动方式,可以根据虚拟现实设备500的实际交互方式确定。
例如,对于媒资播放过程,用户可以选择仅对播放的媒资画面执行录制,或者对整个显示内容执行录屏。对于仅对播放的媒资画面执行录屏的情形,虚拟现实设备500可以通过获取未经过渲染引擎渲染3D场景的媒资数据(即解析视频文件获得的数据),并将其复制,以输出录屏结果。而对于整个显示内容执行录屏的情形,虚拟现实设备500可以对显示器显示的最终画面逐帧进行截屏,以获得多个连续的截屏图像,从而形成视频文件,输出录屏结果。
为了表示当前虚拟现实设备500正在执行录屏操作,在启动录屏功能后,虚拟现实设备500可以在播放界面中显示录屏相关的提示内容。例如,如图6所示,可以在播放界面的右上角区域显示一个常驻的录制符号,该录制符号可以由可闪烁的圆点和时间框组成,当执行录制功能时,圆点通过闪烁来提醒用户正在进行录屏,时间框可以记录录屏获得的视频时长。
需要说明的是,对于录制符号,可以选择是否添加在录屏结果文件中。当选择添加在视频文件中时,可以在录屏视频的右上角区域也显示录制符号,用于标注视频播放进程。当选择不添加在录屏结果文件中时,则在录屏视频中不会带有录制符号。显然,这两种方式在屏幕录制的过程中,需要执行不同的录制程序。即添加录制符号时,虚拟现实设备500需要对所有图层内容叠加结果进行逐帧截取后叠加显示;而不添加录制符号时,虚拟现实设备500不对具有录制符号的图层执行屏幕截取,而是对其他图层内容叠加结果进行逐帧截取。
在一些实施例中,执行录屏操作时虚拟现实设备500还可以在当前界面中显示文字提示窗口(toast),用于提示用户当前已开始录屏或引导用户执行录屏相关的交互操作。例如,显示的文字提示窗口中可以包括文字内容为“已开始屏幕录制”、“再次点击录屏按钮结束录制”等。同理,为了避免文字提示窗口影响屏幕录制过程,文字提示窗口可以在显示后的预设时间内停止显示。例如,toast显示2s后消失,同时显示常驻录制符号,并开始计时。
可见,在上述录屏操作过程中,用户可以通过虚拟现实设备500提供的快捷中心界面开启和关闭录屏功能,并且还可以利用快捷中心界面对录屏过程进行控制。但由于快捷中心界面会在显示时对原界面进行遮挡,并使录屏获得的视频画面中,包含快捷中心界面对应的画面内容。
例如,在虚拟现实设备500进行录屏操作时,如果用户想要结束录屏,则需要先 唤醒快捷中心界面,并在快捷中心界面中点击“结束录屏”控件,输入结束录屏交互指令,此时虚拟现实设备500会停止运行截屏服务,并保存截屏视频文件。由于在结束录屏前,用户唤醒了快捷中心界面,因此在获得的录屏视频临近结束的时间段内,会包含有显示快捷中心界面那一部分的画面,对所要录制的界面内容造成遮挡,降低用户体验。
为了缓解录屏视频文件中快捷中心对所显示界面的遮挡,提高用户体验,在本申请的部分实施例中提供的虚拟现实设备500,虚拟现实设备500可通过运行录屏交互方法,使录屏所获得的视频文件中,不包含显示录屏控制界面时的画面。所述虚拟现实设备500包括:显示器和控制器。其中,显示器用于显示用户界面以及快捷中心等录屏控制界面,如图7所示,所述控制器被配置为执行以下程序步骤:
S1:获取用户输入的用于唤醒录屏控制界面的控制指令。
本实施例中,用于唤醒录屏控制界面的控制指令可根据虚拟现实设备500操作系统中设定的交互策略完成输入。例如,虚拟现实设备500在显示用户界面的同时,可以在界面中的特定位置,如顶部状态栏中显示快捷中心按钮,当用户点击快捷中心按钮时,虚拟现实设备500将在当前界面的基础上显示快捷中心界面。此时,用于唤醒录屏控制界面的控制指令由上述点击快捷中心按钮的操作动作完成输入。
用户还可以通过快捷键交互的方式,完成唤醒录屏控制界面控制指令输入。例如,用户可以通过双击操作手柄上的Home键调出快捷中心界面,即唤醒录屏控制界面,此时,用于唤醒录屏控制界面的控制指令由上述双击Home键的操作动作完成输入。
此外,针对部分虚拟现实设备500,用户还可以借助外接的硬件交互设备或集成的软件交互系统完成控制指令的输入。例如,可以在虚拟现实设备500中内置智能语音系统,用户可以通过麦克风等音频输入设备输入语音信息,如“录屏控制”等。智能语音系统通过对用户语音信息进行转化、分析、处理等方式识别语音信息的含义,并根据识别结果生成控制指令,以控制虚拟现实设备500唤醒录屏控制界面。此时用于唤醒录屏控制界面的控制指令由上述语音输入过程完成输入。
S2:响应于所述控制指令,如果虚拟现实设备处于录屏过程中,暂停运行录屏服务,以及在暂停录屏服务后控制显示器显示录屏控制界面。
在用户通过上述任一种交互方式输入控制指令后,由于虚拟现实设备500将会在当前用户界面基础上显示录屏控制界面,而显示录屏控制界面将会对当前用户界面的部分区域造成遮挡,因此为了减少录屏控制界面的干扰,在本实施例中,虚拟现实设备500可以响应于该控制指令,对虚拟现实设备500是否处于录屏过程进行检测,当检测到虚拟现实设备500处于录屏过程中时,可以暂停运行录屏服务。
其中,录屏服务是指在虚拟现实设备500的操作系统中集成的与录屏相关的控制程序或控制程序的集合。当用户控制开始录屏时,控制器可以通过运行该控制程序,实现屏幕录制功能,即按照录制帧率持续输出多个连续帧图像。因此,在本实施例中,暂停录屏服务是指在接收到控制指令后,控制器暂停执行截屏相关程序,并在暂停期间内,不再输出连续帧图像。
显然,当检测到虚拟现实设备500不在录屏过程中,录屏控制界面的显示不会影响到录屏过程,因此可以直接在用户界面中显示录屏控制界面,以便用通过录屏控制界面执行开始录屏以及其他设备交互指令。
可见,由于虚拟现实设备500在接收到控制指令后,暂停运行录屏服务,因此所显示的录屏控制界面将不会被录屏服务录制,使得录屏获得的视频文件中不包括暂停期间内的显示画面,也就不包含录屏控制界面相关的内容。
S3:执行用户通过所述录屏控制界面输入的交互指令。
在暂停运行录屏服务后,虚拟现实设备500可以显示录屏控制界面,以供用户执行录屏控制相关的交互操作。例如,在用户双击手柄home键唤醒快捷中心后,虚拟现实设备500可以控制暂停录屏服务,并在暂停录屏服务后显示快捷中心界面。用户可以通过点击快捷中心界面上的结束录屏控件,控制虚拟现实设备500结束录屏,从而将录屏所获得的视频文件进行保存,并且保存的录屏文件中不包含暂停运行录屏服务期间的用户界面。
需要说明的是,本实施例中录屏控制界面并不局限于对录屏过程的控制,还可以在录屏控制界面中设置其他常用的功能控件,以便用户在录屏过程中完成其他处理。即在用户唤醒录屏控制界面后,虚拟现实设备500还可以根据录屏控制界面中的设置自定义功能控件,进一步执行其他交互操作。例如,在显示快捷中心界面后,用户可以通过点击快捷中心界面上的“截屏”按钮,将当前时刻的用户界面截屏保存为图片文件。由于虚拟现实设备500在执行部分功能时,也存在相应的图像变化,例如截屏过程中会显示截屏动画,因此通过暂停录屏服务,还可以减少录屏结果中,其他操作对应动画效果的遮挡,使录屏获得的视频更加流畅。
另外,在执行录屏期间,用户还可以通过多次交互动作输入各种功能的控制指令。当控制指令用于唤醒录屏控制界面时,则按照上述方式暂停录屏,以使生成的录屏文件中不包含暂停运行录屏服务期间的用户界面;而当控制指令不是用于唤醒录屏控制界面时,则可以按照操作系统规定的界面显示方式显示各种用户界面,从而使生成的录屏文件中包含除唤醒录屏控制界面外的其他交互动作相关的用户界面内容。
在上述实施例中,虚拟现实设备500可以在用户唤醒录屏控制界面时,通过暂停录屏服务,缓解录屏控制界面对所显示用户界面的遮挡,以避免录屏获得的视频中包含录屏控制界面相应的内容。同理,为了继续进行录屏操作,在用户通过录屏控制界面完成相应的控制后,虚拟现实设备500还可以自动隐藏录屏控制界面,并继续运行录屏服务,即如图8所示,在一些实施例中,执行用户通过所述录屏控制界面输入的交互指令的步骤还包括:
S310:接收用户通过所述继续录制选项输入的继续录制指令;
S320:响应于所述继续录制指令,控制所述显示器隐藏所述录屏控制界面,以及恢复运行所述录屏服务。
在虚拟现实设备500暂停录屏服务并显示录屏控制界面后,可以实时检测用户在录屏控制界面上输入的交互指令,并通过执行该交互指令,实现对录屏过程的控制。例如,用户可以在录屏控制界面中点击“画面设置”按钮控件,并控制跳转至画面设置界面,以便执行亮度、色彩等相关画面设置。用户还可以在录屏控制界面中点击“追踪焦点”按钮,从而在后续录屏过程中,可以通过圆圈等标记符号实时追踪焦点光标位置,以便能够更清晰的表现操作过程。
在执行交互指令后,虚拟现实设备500还可以自动将录屏控制界面进行隐藏,并恢复运行录屏服务,以继续执行录屏操作。例如,用户在录屏过程中双击手柄上的home 键唤醒快捷中心界面时,虚拟现实设备500会暂停录屏服务并显示快捷中心界面。用户再通过快捷中心界面完成其他交互操作,如点击“投屏”按钮控件,控制虚拟现实设备500进行投屏操作等。待虚拟现实设备500完成投屏相关操作后,会自动隐藏快捷中心界面,并恢复运行录屏服务,以便继续对当前用户界面执行录屏操作。
显然,用户通过录屏控制界面所执行的交互操作可以包括直接点击录屏控制界面上的控件,以及与录屏控制界面相关的其他操作。其中,直接点击录屏控制界面上的控件的交互操作依赖于录屏控制界面中的控件布置方式。例如,用户可以通过点击录屏控制界面上的“结束投屏”按钮控件,控制虚拟现实设备500结束投屏;以及通过退出录屏控制界面操作,控制虚拟现实设备500继续运行录屏服务。
即如图9所示,在一些实施例中,执行所述交互指令的步骤中,所述控制器被进一步配置为:
S321:解析所述交互指令指定的控制动作;
S322:如果所述控制动作用于停止录屏,停止运行所述录屏服务;
S323:保存或发送录屏视频文件。
在执行录屏操作的过程中,用户可以通过调出录屏控制界面,并点击录屏控制界面上的“结束录屏”按钮,控制虚拟现实设备500停止录屏。因此,在显示录屏控制界面的过程中,虚拟现实设备500可以实时解析用户输入的交互指令所对应的控制动作,当交互指令为点击“结束录屏”按钮时,确定对应的控制动作用于停止录屏。对于这种控制动作的交互指令,虚拟现实设备500可以响应于该交互指令,停止运行录屏服务,并将录屏所获得的视频内容进行保存和/或发送,实现录屏功能。
其中,“结束录屏”按钮作为录屏控制界面中的一个交互控件,可以与“开始录屏”选项集成在一起。例如,在未开始录屏时,快捷中心中“录屏”选项的功能为开始录屏,而在录屏过程中,快捷中心“录屏”选项的功能为结束录屏。
基于此,用户输入用于停止录屏指令的操作方式可以为:在录屏过程中,用户通过双击手柄上的home键,唤醒快捷中心。此时,由于录屏服务正在运行中(处于暂停状态),快捷中心界面中“录屏”选项位置显示的控件为“结束录屏”选项,因此在用户点击“结束录屏”选项后,即输入结束录屏指令。
在一些实施例中,对于与录屏控制界面相关的操作的交互指令,如在虚拟现实设备500显示录屏控制界面时,如果用户按下手柄上的“返回”键,则表示用户不再使用录屏控制界面进行控制,因此虚拟现实设备500可以隐藏录屏控制界面,并继续运行录屏服务。即如图10所示,控制所述显示器显示所述录屏控制界面的步骤后,所述控制器被进一步配置为:
S341:获取用户输入的用于退出录屏控制界面的退出指令;
S342:响应于所述退出指令,隐藏所述录屏控制界面;
S343:恢复运行所述录屏服务。
本实施例中,退出指令是指用于关闭或隐藏录屏控制界面的指令,可以通过不同的交互方式完成输入。例如,用户可以在显示录屏控制界面时,通过按下手柄上的“返回”键,控制退出录屏控制界面;还可以通过点击录屏控制界面以外的区域,控制退出录屏控制界面。
在用户输入退出指令后,虚拟现实设备500可以隐藏录屏控制界面,以消除录屏 控制界面对下层所显示用户界面的遮挡,再恢复运行录屏服务,以通过录屏服务对所显示的用户界面继续执行录屏操作。
可见,在上述实施例中,在虚拟现实设备500显示录屏控制界面后,用户可以通过输入交互指令对录屏过程以及其他过程进行控制,并且虚拟现实设备500在执行交互指令或者用户退出录屏控制界面后,继续运行录屏服务,从而在缓解录屏控制界面遮挡当前界面的前提下,自动恢复录屏服务,以完成录屏操作。
由于对录屏的控制,依赖于录屏服务已经运行,而在实际应用中,用户也可能在未运行录屏服务的状态下,输入唤醒录屏控制界面的控制指令。此时,虚拟现实设备500可以跳过暂停录屏服务的步骤,直接显示录屏控制界面。即如图11所示,在一些实施例中,响应于所述控制指令,暂停录屏服务的步骤还包括:
S201:检测所述录屏服务的运行状态;
S202:如果所述录屏服务在运行中,标记所述虚拟现实设备处于录屏过程中;
S203:如果所述录屏服务不在运行中,标记所述虚拟现实设备未处于录屏过程中。
本实施例中,虚拟现实设备500在接收到唤醒录屏控制界面的控制指令后,可以响应于该控制指令,对录屏服务的当前运行状态进行检测。当检测到虚拟现实设备500已运行录屏服务,即当前虚拟现实设备500处于录屏过程中,则可以按照上述实施例中的方式暂停录屏服务,并控制显示器显示录屏控制界面;而当未检测到虚拟现实设备500正在运行录屏服务,即当前虚拟现实设备500未处于录屏过程中,则无需暂停录屏服务,而直接显示录屏控制界面。
可见,在本实施例中,当虚拟现实设备500未运行录屏服务时,用户可以通过录屏控制界面控制开始录屏。由于开始录屏后,虚拟现实设备500会输出当前显示的图像,因此为了避免在录屏视频的开始时段出现录屏控制界面遮挡的问题。
结合上述实施例,用户可以通过如下交互操作实现录屏功能:在普通应用场景中,用户可以通过双击手柄控制器的Home按键启动录屏功能。系统接收到手柄按键双击事件后,唤醒快捷中心的操作控件,如图12所示。再通过手柄操作点击录屏按钮,使录屏服务初始化,即录屏服务处于ready状态。
虚拟现实设备500在用户点击录屏按钮后,会自动唤醒计时控件,开启倒计时,并在倒计时结束后开始录屏,即录屏服务处于running状态。同时在开始录屏以后,计时控件开始正计时且始终显示在场景最上层,以提醒用户已录制的时长,如图13所示。
在用户想要结束录屏时,可再次双击手柄控制器的Home按键,唤醒快捷中心的操作控件。使虚拟现实设备500控制暂停录屏服务,即录屏服务处于pause状态。计时控件也暂停计时,可依然保持显示在最上层,如图14所示。
显示快捷中心窗口期间,如果用户此时想要继续录屏,则可以点击手柄上的返回(Return)按键,退出快捷中心窗口。退出快捷中心窗口后,录屏服务继续运行,即录屏服务再次处于running状态,再将快捷中心控件隐藏,计时控件继续计时。如果用户此时想要结束录屏,则点击录屏按钮,停止录屏服务,如图15所示。再将快捷中心控件隐藏,计时控件也停止计时并隐藏。
可见,上述实施例中提供的录屏交互方式,可以缓解录屏过程中,录屏相关控制动作的画面遮挡用户界面,以获得更好的录屏视频文件。并且录屏功能的操作简单快 捷,能快速响应并实时反馈信息。
由于上述实施例可以在录屏过程中唤醒录屏控制界面时暂停录屏服务,以显示录屏控制界面,并且用户通过录屏控制界面所执行操作不会在瞬间完成,因此上述录屏交互方法会在部分录屏过程中,丢失用户输入交互指令时段的画面内容。例如,当虚拟现实设备500播放视频媒资并执行录屏时,由于在输入交互指令期间录屏服务被暂停,而视频仍在播放中,因此最终生成的视频将不会包含输入交互指令期间所播放的媒资画面内容。为此,如图16所示,在一些实施例中,获取用户输入的用于唤醒录屏控制界面的控制指令的步骤后,所述控制器被进一步配置为:
S101:检测所述显示器当前显示的界面类型;
S102:如果所述界面类型为媒资播放界面,在获取所述控制指令后暂停媒资播放进程;
S103:在执行用户通过所述录屏控制界面输入的交互指令后,继续媒资播放进程。
本实施例中,虚拟现实设备500可以在用户输入唤醒录屏控制界面的控制指令后,对当前显示的界面类型进行检测,以确定是否显示媒资播放界面。如果界面类型为媒资播放界面,则在获取控制指令后暂停媒资播放进程,即媒资播放进程与录屏服务均被暂停,并且在执行用户通过录屏控制界面输入的交互指令后,继续媒资播放进程,从而避免因录屏服务被暂停,录屏文件中缺少交互指令期间所播放的媒资画面内容的问题。
例如,用户通过媒资列表选择任一媒资进行播放后,虚拟现实设备500可以呈现媒资播放界面。用户再通过快捷中心点击开始录屏选项,启动运行录屏服务,从而对所播放的媒资进行录制。当用户在媒资播放至0:17:03时刻再次双击手柄上的home键,唤醒快捷中心界面时,媒资播放进程也在0:17:03时刻暂停,并在用户完成交互操作后,继续从0:17:03时刻播放媒资,从而缓解录屏画面部分缺失的问题。
基于上述虚拟现实设备500,在本申请的部分实施例还提供的录屏交互方法,该录屏交互方法可应用于虚拟现实设备500,以实现屏幕录制。所述录屏交互方法包括以下步骤:
S1:获取用户输入的用于唤醒录屏控制界面的控制指令;
S2:响应于所述控制指令,如果所述虚拟现实设备处于录屏过程中,暂停运行录屏服务,以及在暂停录屏服务后控制所述显示器显示所述录屏控制界面;
S3:执行用户通过所述录屏控制界面输入的交互指令,当所述交互指令通过所述结束录制选项输入时,结束运行录屏服务,以使生成的录屏文件中不包含暂停运行录屏服务期间的用户界面。
由以上技术方案可知,上述实施例中提供的虚拟现实设备及录屏交互方法可以在用户唤醒录屏控制界面时,暂停运行录屏服务,并显示录屏控制界面,以供用户通过录屏控制界面输入交互指令,对录屏过程进行控制。所述录屏交互方法可以通过暂停运行录屏服务,即暂停对当前屏幕显示内容的录制,使录屏获得的视频文件中不包含录屏控制界面对应的画面,缓解录屏控制界面遮挡画面内容。
基于上述虚拟现实设备500的录屏功能,虚拟现实设备500在开始录屏后,可以实时保存所显示的内容,直到录屏结束。录屏操作获得的视频画面也随着用户的继续使用而跟随用户的交互动作变化。由于录屏过程通常可以持续一定的时间,因此在进 行录屏时,虚拟现实设备500上显示的画面可以随着用户佩戴动作发生改变。例如,用户在佩戴虚拟现实设备500的过程中,如果转动头部,则可以调整观看视角,使虚拟现实设备500显示新的视角下的用户界面,此时录屏获得的视频画面也会从初始视角下的用户界面对应画面变成新的视角下的用户界面对应画面。
即姿态传感器可以检测用户的运动过程,以生成用户姿态数据。再将生成的用户姿态数据传输给控制器,以使控制器可以根据用户姿态数据调整左显示器和右显示器中的画面内容。但是,由于虚拟现实设备500需要佩戴在用户的头部,因此在佩戴过程中,当在录屏过程中用户头部发生无意识的轻微运动时,姿态传感器也会将无意识的轻微运动进行检测,并触发控制器调整显示的VR画面内容。这些无意识的轻微运动将会导致录屏获得视频画面出现频繁晃动,降低录屏输出画面的画面质量。
为了提高录屏质量,在本申请的部分实施例中,提供的防抖动录屏方法,该方法可应用于虚拟现实设备500中。所述虚拟现实设备500包括显示器、姿态传感器以及控制器。为了实施该防抖动录屏方法,如图17所示,虚拟现实设备500的控制器可以被配置为执行以下程序步骤:
S4:接收用户输入的用于开始录屏的控制指令。
S5:响应于所述控制指令,对所述用户姿态数据执行平滑处理。
虚拟现实设备500可以在接收到用户输入的控制指令后,开始执行录屏功能,即按照设定的录屏参数保存开始录屏后虚拟现实设备500对应的画面内容。同时,虚拟现实设备500可以通过姿态传感器实时监测用户姿态数据。即通过重力加速度传感器和陀螺仪的传感器设备检测用户的头部摆动动作。
为了缓解录屏过程中的抖动,在执行录屏功能的同时,还可以对姿态传感器检测的姿态数据执行平滑处理。所述平滑处理是通过滤波算法,将姿态传感器检测的数据进行滤波,去除姿态数据中的瞬间波动。
例如,虚拟现实设备500可以在获取姿态传感器检测的用户姿态数据后,从用户姿态数据中提取姿态传感器检测的角度在x轴、y轴和z轴上的分量,以及提取输出前一帧录屏图像画面时的姿态数据,显然提取的前一帧录屏图像画面对应的姿态数据也可以为角度在x轴、y轴和z轴上的分量。再根据用户姿态数据和输出前一帧录屏图像画面时的姿态数据,计算等效姿态数据。即等效姿态数据:
X K=X k-1+(XD M-X k-1)/(T M-T k-1)×c×(T k-T k-1);
Y K=Y k-1+(YD M-Y k-1)/(T M-T k-1)×c×(T k-T k-1);
Z K=Z k-1+(ZD M-Z k-1)/(T M-T k-1)×c×(T k-T k-1);
其中,X K、Y K、Z K为输出第k帧录屏图像画面时在X轴、Y轴和Z轴方向上的角度;X K-1、Y K-1、Z K-1为输出第k-1帧录屏图像画面时在X轴、Y轴和Z轴方向上的角度;XD M、YD M、YD M为T k时间姿态传感器在X轴、Y轴和Z轴方向上检测的角度数据;T M为姿态传感器数据上报XD M、YD M、YD M数据的时间;T k为第k帧的时间;T k-1为第k-1帧的时间;c为介于0-1之间为经验值常数。
根据上述平滑处理方式,可以在获取姿态传感器检测的姿态数据XD M、YD M、YD M后,通过提取前一帧录屏图像画面对应的姿态数据X K-1、Y K-1、Z K-1并结合姿态数据的上报时间和两帧图像间的间隔时间,计算出等效姿态数据。可见,通过参考前一帧录屏图像画面对应的姿态数据以及相关时间参数,使虚拟现实设备500对图像画面的 调整过程区域平缓,从而减轻最终画面中的抖动。
S6:根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
在对用户姿态数据执行平滑处理后,虚拟现实设备500可以根据平滑处理后的用户姿态数据,在渲染场景中拍摄录屏图像。其中,渲染场景是指由虚拟现实设备500渲染引擎通过渲染程序构建的一个虚拟场景。例如,基于unity 3D渲染引擎的虚拟现实设备500,可以在呈现显示画面时,构建一个unity 3D场景。在unity 3D场景中,可以添加各种虚拟物体和功能控件,以渲染出特定的使用场景。如在播放多媒体资源时,可以在unity 3D场景中添加一个显示面板,该显示面板用于呈现多媒体资源画面。同时,还可以在unity 3D场景中添加座椅、音响、人物等虚拟物体模型,从而营造出影院效果。
为了输出渲染后的画面,虚拟现实设备500还可以在unity 3D场景中设置虚拟相机。例如,虚拟现实设备500可以按照用户双眼的位置关系,在unity 3D场景中设置左眼相机和右眼相机,两个虚拟相机可以同时对unity 3D场景中的物体进行拍摄,从而向左显示器和右显示器分别输出渲染画面。为了获得更好的沉浸感体验,两个虚拟相机在unity 3D场景中的角度可以随着虚拟现实设备500的姿态传感器实时调整,从而在用户佩戴虚拟现实设备500行动时,可以实时输出不同观看角度下的unity 3D场景中的渲染画面。
基于此,虚拟现实设备500可以通过渲染场景获取多帧录屏图像画面,从而生成录屏视频文件。例如,虚拟现实设备500在开始录屏后,可以获取左眼相机和/或右眼相机拍摄的图像,并将图像进行复制,从而输出录屏图像画面。还可以在渲染场景中设置专门用于录屏的虚拟相机,即虚拟录屏相机,从而在开始录屏后,通过虚拟录屏相机获取拍摄到的图像画面,并输出为录屏图像。
由于虚拟相机可以被配置为跟随姿态传感器检测的姿态数据调整拍摄角度,因此在姿态数据被执行平滑处理后,虚拟现实设备500在渲染场景中拍摄获得的图像内容变化也趋于平缓,达到防抖动的效果。
在一些实施例中,为了对用户姿态数据执行平滑处理,虚拟现实设备500可以在获取用户输入的录屏指令后,通过设置虚拟录屏相机,并控制虚拟录屏相机的拍摄参数,使虚拟录屏相机能够输出平稳的录屏图像,即如图18、图19所示,控制器可被进一步配置为执行以下程序步骤:
S410:在渲染场景中加载虚拟录屏相机;
S420:在接收到所述控制指令后,启动所述虚拟录屏相机;
S430:根据平滑处理后的用户姿态数据,设置所述虚拟录屏相机的拍摄角度,以对所述渲染场景执行图像拍摄。
本实施例中,所述虚拟录屏相机是一种依赖于渲染场景的软件程序,用于对渲染场景进行拍摄,以获得录屏图像。虚拟录屏相机可以是独立于左眼相机和右眼相机而设置的一个中间相机,在用户使用虚拟现实设备500时,可以随应用加载至渲染场景中,以便在使用录屏功能时启用。
即,当用户未使用录屏功能时,虚拟录屏相机可以不对渲染场景执行拍摄,即处于休眠状态,不会输出录屏图像。当用户使用录屏功能时,用户会通过交互动作输入 控制指令。此时,虚拟现实设备500可以在接收到控制指令后,启动虚拟录屏相机,开始对渲染场景进行图像拍摄,并输出录屏视频图像,实现录屏功能。
在启用虚拟录屏相机以后,虚拟录屏相机可以同左眼相机或右眼相机的图像拍摄方式,渲染场景中执行图像拍摄。并且实时接收姿态传感器检测的用户姿态数据,以及根据用户姿态数据,调整拍摄角度。为了防止录屏时的抖动,用户姿态数据可以先经过平滑处理后,再输入虚拟录屏相机。从而根据平滑处理后的用户姿态数据,设置所述虚拟录屏相机的拍摄角度,以对所述渲染场景执行图像拍摄。
同理,为了使用户能够观看到虚拟现实画面,在一些实施例中,虚拟现实设备500还可以在用户使用虚拟现实设备500时,在渲染场景中加载虚拟显示相机。其中,所述虚拟显示相机包括左眼相机和右眼相机,并将虚拟录屏相机设置在左眼相机和右眼相机之间的中部位置。
使用过程中,左眼相机可以模拟用户的左侧眼睛在渲染场景中拍摄左眼图像;右眼相机则以模拟用户的右侧眼睛在渲染场景中拍摄右眼图像,而虚拟录屏相机则对渲染场景进行图像拍摄,以获取录屏图像。由于虚拟录屏相机设置在左眼相机和右眼相机之间的中部位置上,因此虚拟录屏相机输出的录屏图像为更接近于用户直接看到的显示画面内容。
在启用渲染场景中的左眼相机和右眼相机以后,可以根据未平滑处理的用户姿态数据,设置左眼相机和右眼相机的拍摄角度,并通过左眼相机和右眼相机对渲染场景执行图像拍摄。可见,在本实施例中,姿态传感器检测的姿态数据可以复制为两份,其中一份执行平滑处理,并将处理后的姿态数据发送给虚拟录屏相机;另一份不执行平滑处理在,直接发送到左眼相机和右眼相机。
如图20所示,在一些实施例中,虚拟现实设备500还可以按照用户指定的形式输出录屏视频文件,即所述控制器被进一步配置为执行以下程序步骤:
S510:控制所述显示器显示录屏参数设置界面;
S520:接收用户通过所述录屏参数设置界面输入的录屏参数;
S530:按照所述录屏图像尺寸设置所述虚拟录屏相机的拍摄范围;
S540:按照所述录屏帧率设置所述虚拟录屏相机的录屏图像画面输出帧率。
为了指定录屏视频文件的输出形式,虚拟现实设备500可以在使用中显示录屏参数设置界面,用户可以通过用户参数界面执行交互动作输入录屏参数。例如,所述录屏参数包括录屏图像尺寸和录屏帧率,用户可以通过录屏参数设置界面上的文本输入框,输入录屏图像宽度1920和高度1080,以及通过拖动滚动条的方式,设置视频文件的帧率为60Hz,则可以控制虚拟现实设备500输出1920×1080画面尺寸、帧率为60Hz的录屏视频。
在用户输入录屏参数以后,虚拟现实设备500还可以根据用户输入的录屏参数对渲染场景中虚拟录屏相机的拍摄方式进行设置,拍摄方式设置可以包括两个方面,一方面为按照录屏图像尺寸设置虚拟录屏相机的拍摄范围,另一方面为按照录屏帧率设置虚拟录屏相机的录屏图像画面输出帧率。其中,拍摄范围可以通过调整虚拟录屏相机的位置、焦距等参数实现,使渲染场景中的主要内容影像充满录屏图像,从而获得更清晰的录屏图像画面。输出帧率即单位时间内,虚拟录屏相机所拍摄的图像数,输出帧率越高,则最终生成的视频画面越流畅,相应的生成文件时的数据处理量也越大。 因此,输出帧率应控制在一个合理的范围内,如30Hz-120Hz。
需要说明的是,用户还可以通过录屏参数设置界面输入其他类型的录屏参数,例如,色彩范围、压缩方式、编码格式等。虚拟现实设备500可以按照设备所支持的录屏输出方式在录屏参数设置界面上设置相应的控制选项,以供用户选择输入。并且在用户输入录屏参数后,虚拟现实设备500则按照录屏参数指定的输出方式进行录屏操作,从而实现按照用户指定的形式输出录屏视频文件。即虚拟现实设备500可以逐帧提取在渲染场景中拍摄的录屏图像,并对多帧录屏图像执行编码,以生成录屏视频文件,最后存储或发送所述录屏视频文件。
在上述实施例中,虚拟现实设备500通过对用户姿态数据执行平滑处理,过滤录屏过程中的抖动影响。实际应用中,还可以通过对用户姿态数据的具体数值进行判断,从而在用户姿态数据变化较小时,通过锁定虚拟相机的拍摄角度,实现防抖效果。即在本申请的部分实施例中,还提供的虚拟现实设备500,包括:显示器、姿态传感器以及控制器。其中,显示器被配置为显示用户界面,姿态传感器被配置为实时检测用户姿态数据;如图21所示,控制器被配置为执行以下程序步骤:
S610:接收用户输入的用于开始录屏的控制指令;
S620:响应于所述控制指令,通过所述姿态传感器获取用户姿态数据;
S630:根据所述用户姿态数据计算姿态变化量;
S640:根据所述姿态变化量从渲染场景中拍摄录屏图像,以在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。
与上述实施例相同,本实施例中虚拟现实设备500在接收到用户输入的录屏控制指令后,可以响应于该控制指令,通过姿态传感器获取用户姿态数据。但不同点在于,本实施例在获取用户姿态数据后,可以直接读取用户姿态数据中检测的角度数值,并根据角度数值计算姿态变化量。
其中,所述姿态变化量为用户姿态数据与前一帧录屏图像时的姿态数据的差值。例如,用户姿态数据为当前时刻倾斜角度在空间直角坐标系中的三轴分量,即θ x、θ y以及θ z;而前一帧录屏图像时对应的姿态数据同样为前一帧时刻倾斜角度在空间直角坐标系中的三轴分量,即θ x0、θ y0以及θ z0,因此,可以计算姿态变化量为(θ xx0),(θ yy0)以及(θ zz0)。
在计算姿态变化量以后,虚拟现实设备500可以对姿态变化量的具体数值进行判断,并在渲染场景中拍摄录屏图像,从而在姿态变化量小于预设抖动阈值时,输出稳定拍摄角度的录屏图像画面。通过对姿态变化量进行判断,虚拟现实设备500可以去除因抖动引起的姿态变化,从而在姿态变化量较小时,通过控制虚拟相机不改变拍摄角度,输出稳定拍摄角度的录屏图像。
例如,在计算得到姿态变化量以后,虚拟现实设备500可以对比姿态变化量与预设抖动阈值;如果姿态变化量小于或等于预设抖动阈值,则说明当前姿态变化量较小,很有可能是由于抖动造成,因此可以按照输出前一帧录屏图像时的姿态数据从渲染场景中拍摄录屏图像,以输出稳定拍摄角度的录屏图像画面;如果姿态变化量大于预设抖动阈值,则说明当前姿态变化量较大,姿态变化量是由用户在佩戴过程中的主动动作造成,因此可以按照获取的用户姿态数据从渲染场景中拍摄录屏图像。
需要说明的是,在本实施例中,虚拟现实设备500同样可以通过在渲染场景中加 载虚拟录屏相机的方式实现对渲染场景的图像拍摄。并在执行图像拍摄的过程中,当姿态变化量小于预设抖动阈值时,不向虚拟录屏相机输入当前姿态数据,使虚拟录屏相机可以在前一帧姿态数据对应的拍摄角度下,完成当前帧图像拍摄,以输出稳定拍摄角度的录屏图像画面。当姿态变化量大于预设抖动阈值时,则直接将当期姿态数据发送给虚拟录屏相机,从而使虚拟录屏相机可以根据当前姿态数据调整拍摄角度,从而获取新视角下的录屏图像画面。
为了减少数据处理量,在部分实施例中,还可以间帧对比姿态数据,例如,虚拟现实设备500可以每间隔5帧获取一次姿态数据,并对不同获取次序之间的姿态变化量进行计算,从而在判断是否姿态变化量是否小于预设抖动阈值。
由以上技术方案可知,上述实施例提供的虚拟现实设备500及防抖动录屏方法,可以在用户输入开始录屏的控制指令后,获取用户姿态数据,并根据用户姿态数据计算姿态变化量。当姿态变化量小于预设抖动阈值时,可以按照输出前一帧录屏图像时的姿态数据输出稳定拍摄角度的录屏图像画面,缓解录屏时抖动造成的影响。
在另一些实施例中,为了提高录屏输出画面的画面质量,在本申请的部分实施例中提供的静态录屏方法,所述静态录屏方法可以应用于虚拟现实设备500,也可以应用于带有相同功能硬件的增强现实设备、可穿戴设备、VR游戏设备等头戴设备。
如图22、图23所示,在用户输入用于开始录屏的控制指令后,虚拟现实设备500可以响应于所述控制指令,获取初始姿态数据和实时姿态数据。其中,所述初始姿态数据为在控制指令的输入时刻记录的用户姿态数据;所述实时姿态数据为在接收控制指令后通过姿态传感器持续检测的用户姿态数据。
例如,当用户通过快捷中心窗口点击“录屏”选项后,虚拟现实设备500可以启动录屏相关程序。通过执行录屏相关程序,虚拟现实设备500可以通过姿态传感器检测当前的用户姿态数据,并对用户输入控制指令时刻的用户姿态数据进行记录,作为初始姿态数据。开始录屏后,用户可以继续在佩戴虚拟现实设备500的状态下,从事有意识或无意识的动作。此时,姿态传感器会将用户动作引起的姿态数据变化进行检测,以用于调整显示的VR画面。因此,在用户输入控制指令后,虚拟现实设备500还可以逐帧检测用户姿态数据,获得实时姿态数据。
检测初始姿态数据和实时姿态数据后,虚拟现实设备500可以根据检测的数据内容进行计算,以计算出动作判断量。其中,所述动作判断量为实时姿态数据与初始姿态数据角度差大于预设角度阈值的累积时间。
虚拟现实设备500在获取初始姿态数据和实时姿态数据后,可以先根据实时姿态数据与初始姿态数据进行角度差计算,即计算出用户动作过程所对应的视角方向变化量。在一些实施例中,为了计算出实时姿态数据与初始姿态数据之间的角度差,虚拟现实设备500可以在计算动作判断量的过程中,先提取初始姿态数据的方向四元数A(w 0,x 0,y 0,z 0)以及提取实时姿态数据中的方向四元数B(w,x,y,z)。再对初始姿态数据的方向四元数A取逆运算,得到逆矩阵A -1,即A -1=(w 0,x 0,y 0,z 0) -1。计算实时姿态数据中的方向四元数B与逆矩阵A -1的乘积,以获得姿态差C,在计算获得姿态差C以后,还可以对姿态差C对应的四元数进行归一化处理,从而根据姿态差C计算角度差。其中,角度差θ按照下式计算获得:
θ=arccos(C·w)*2;
式中,θ表示角度差;C表示姿态差,即C=B*A -1;w表示实时姿态数据对应四元数B(w,x,y,z)中的第一元素值。
经过上述计算,可以获得角度差θ,由于上述角度差通过弧度表示,与用户习惯上的角度表示方式不相符,因此在计算获得角度差θ后,虚拟现实设备500还可以将弧度表示转换为角度表示,即角度差α=θ*180/π。
由于在实际佩戴过程中,用户无意识的动作一般幅度较小,因此计算获得角度差后,虚拟现实设备500可以针对计算获得的角度差进行判断,确定角度差是否超过预先设置的角度判断阈值。即虚拟现实设备500可以对比角度差α与预设角度阈值α 0。如果角度差大于预设角度阈值,即α>α 0,则当前实时姿态数据对应的用户动作很有可能为用户的无意识动作。同理,如果角度差小于或等于预设角度阈值,即α≤α 0,则当前实时姿态数据对应的用户动作可能为用户的主动动作。
再分别判断多帧实时姿态数据对应的角度差是否超出预设角度阈值,从而确定角度差超出预设角度阈值状态所持续的时间,即动作判断量。如果动作判断量小于或等于预设判断阈值,即角度差超出预设角度阈值状态所持续的时间较短,在此次检测周期内的用户姿态变化为用户无意识动作导致,因此虚拟现实设备500可以锁定录屏视角,即在此期间使用初始姿态数据设置录屏视角方向,以生成稳定状态的录屏数据。
同理,如果动作判断量大于预设判断阈值,即角度差超出预设角度阈值状态所持续的时间较长,在此次检测周期内的用户姿态变化为用户的主动动作导致,因此虚拟现实设备500可以跟随用户动作,将录屏视角方向拉齐到新角度,即使用实时姿态数据更新录屏视角方向。
例如,虚拟现实设备500可以设定角度阈值为20°,设定判断阈值为0.2s,即用户动作导致的角度差超过20°的状态持续0.2s以上,认定当前实时姿态数据由用户的主动动作导致。因此,在计算确定角度差超过20°的状态持续0.2s以上后,虚拟现实设备500可以将录屏角度拉齐到新的角度,即最新一帧实时姿态数据对应的角度,继续进行录屏。而在计算确定角度差始终未超过20°,或者角度差超过20°的状态未持续0.2s以上后,虚拟现实设备500可以始终保持在初始角度进行录屏操作,即按照用户输入控制指令时的初始姿态数据对应的角度设置录屏方向。
需要说明的是,虚拟现实设备500在确定动作判断量大于预设判断阈值,并使用实时姿态数据更新录屏视角方向以后,虚拟现实设备500进入下一个检测周期,即虚拟现实设备500可以将用于更新录屏视角方向的实时姿态数据(w n,x n,y n,z n)作为下一个检测周期的初始姿态数据,并在下一个检测周期内,使用新检测的实时姿态数据(w n+1,x n+1,y n+1,z n+1)、(w n+2,x n+2,y n+2,z n+2)……与下一个检测周期内的初始姿态数据(w n,x n,y n,z n)进行角度差以及角度差超出角度阈值状态累积时间的计算。依次类推,虚拟现实设备500就可以通过多个检测周期,在维持录屏输出画面的稳定的前提下,不断拉齐用户动作对应的角度,完成录屏操作。
由以上技术方案可知,上述实施例中提供的静态录屏方法能够综合角度变化和时间维持的状态,精确判断用户姿态数据的变化原因,智能锁定或解锁录屏视角,使输出的视频画面更加稳定。
为了满足不同用户对录屏效果的不同需求,在一些实施例中,静态录屏功能可以作为一种录屏模式,并设置在特定的控制界面中供用户选择。例如,如图24所示,虚 拟现实设备500的录屏控制界面中可以设置“静态录屏”选项和“动态录屏”选项。用户可以通过点击不同的选项,设置录屏过程所采用的录屏方式。为此,如图25所示,虚拟现实设备500可以在用户输入用于开始录屏的控制指令后,响应于控制指令,解析控制指令中指定的录屏方式。
其中,所述录屏方式可以包括静态录屏和动态录屏两种方式。即当用户在录屏设置界面中选中静态录屏选项时,其输入的控制指令中指定的录屏方式为静态录屏;同理,当用户在录屏设置界面中选中动态录屏选项时,其输入的控制指令中指定的录屏方式为动态录屏。
虚拟现实设备500可以针对不同的录屏方式,执行不同的录屏过程。即如果录屏方式为静态录屏,则执行获取初始姿态数据和实时姿态数据的步骤,并按照上述实施例中提供的方式,根据初始姿态数据和实时姿态数据计算动作判断量,并根据不同的动作判断量设置不同的录屏方式。如果录屏方式为动态录屏,则虚拟现实设备500可以按照常规的录屏方式,通过姿态传感器检测实时姿态数据,并使用实时姿态数据设置录屏视角方向,取消录屏过程中虚拟现实设备500对录屏视角的锁定,生成与用户观看内容相同的录屏数据。
It can be seen that, in the above embodiments, the virtual reality device 500 may provide a recording settings interface through which the user can choose between static recording and dynamic recording, so that the virtual reality device 500 can meet the personalized needs of different users.
During screen recording, the virtual reality device 500 may generate the recorded picture in real time from the VR picture and output the recording data. The content of the recorded picture may be the same as the picture displayed by the virtual reality device 500; for example, the recorded picture may directly reuse the content displayed on the left display or the right display. However, static recording requires the recording angle to be locked during the user's small, brief unintentional movements, whereas, to provide a better sense of immersion, the pictures shown on the left and right displays follow the user's pose in real time. As a result, the recorded content, which is free of frequent shaking, will differ in part from the content the user actually sees.
To meet the needs of static recording, in some embodiments the virtual reality device 500 may set up an independent virtual screen-recording camera dedicated to the recording operation, and capture images of the rendered scene through this camera to generate the recording data. That is, in the step of acquiring the initial pose data and the real-time pose data, the virtual reality device 500 may first add a virtual screen-recording camera to the rendered scene.
In some embodiments, to output the recording data, the virtual reality device 500 may further set the output frame rate of the recording data and capture multiple consecutive frames of the current rendered scene at that frame rate to generate the recording data. Using a virtual screen-recording camera that is independent of the left-eye and right-eye cameras reduces the interference of the recording process with the user's normal viewing experience, and because the virtual screen-recording camera can hold a fixed shooting angle for relatively long periods, the virtual reality device 500 can ensure that the recorded video does not shake frequently, giving the user a better viewing experience.
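Purely as an illustrative sketch, recording frames with a dedicated virtual camera at a configured output frame rate might be organised as follows; the render() placeholder and the 30 FPS value are assumptions.

OUTPUT_FPS = 30          # assumed configurable output frame rate of the recording data

class RecordingCamera:
    # Stand-in for the virtual screen-recording camera, independent of the eye cameras.
    def __init__(self, pose):
        self.pose = pose
    def render(self, scene):
        # Placeholder: render one frame of `scene` from self.pose.
        return {"pose": self.pose, "scene": scene}

def record_clip(scene, recording_poses):
    # recording_poses: one pose per output frame, e.g. produced by run_detection_cycles().
    camera = RecordingCamera(recording_poses[0])
    frames = []
    for pose in recording_poses:
        camera.pose = pose               # stays constant while the recording view is locked
        frames.append(camera.render(scene))
    return frames                        # these frames would then be encoded at OUTPUT_FPS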
In the above embodiments, the virtual reality device 500 judges the user's movement mainly from the motion judgment quantity, i.e., the accumulated time during which the angle difference between the real-time pose data and the initial pose data is greater than the preset angle threshold. Since the sampling frame rate of the attitude sensor is generally fixed, once the device is worn the attitude sensor can feed the collected user pose data back to the controller at that sampling rate. For example, if the sampling frame rate of the attitude sensor is 60 FPS (frames per second), the attitude sensor can feed back 60 frames of real-time pose data to the controller within 1 s, and the time interval between two adjacent frames of real-time pose data is a fixed 0.016 s. On this basis, when calculating the motion judgment quantity, the virtual reality device 500 can determine the accumulated time during which the angle difference exceeds the preset angle threshold from an accumulated frame count.
That is, as shown in FIG. 26, to calculate the motion judgment quantity, in some embodiments the virtual reality device 500 may, in the step of calculating the motion judgment quantity, acquire multiple frames of real-time pose data in sequence, calculate the angle difference between each frame of real-time pose data and the initial pose data, and record the number of consecutive accumulated frames for which the angle difference is greater than the preset angle threshold, so as to determine from this count the accumulated time during which the angle difference exceeds the preset angle threshold. If the number of consecutive accumulated frames is greater than the preset frame-count threshold, it is determined that the motion judgment quantity is greater than the preset judgment threshold; if the number of consecutive accumulated frames is smaller than or equal to the preset frame-count threshold, it is determined that the motion judgment quantity is smaller than or equal to the preset judgment threshold.
For example, for a 60 FPS attitude sensor, the virtual reality device 500 may set the frame-count threshold to 100 frames, corresponding to an accumulated time of roughly 1.5 s during which the angle difference exceeds the preset angle threshold. Therefore, after determining that the angle difference between one frame of real-time pose data and the initial pose data is greater than the preset angle threshold, i.e., α_1 > α_0, the virtual reality device 500 may evaluate every subsequent frame of user pose data. If, within one detection period, 100 consecutive frames of real-time pose data all have an angle difference from the initial pose data greater than the preset angle threshold, i.e., α_2 > α_0, α_3 > α_0, α_4 > α_0, ..., it is determined that the accumulated time during which the angle difference exceeds the preset angle threshold is more than 1.5 s, i.e., the motion judgment quantity is greater than the preset judgment threshold.
Likewise, if within one detection period any frame of real-time pose data has an angle difference from the initial pose data smaller than or equal to the preset angle threshold, e.g., α_10 ≤ α_0, it is determined that the accumulated time during which the angle difference exceeds the preset angle threshold does not exceed 1.5 s, i.e., the motion judgment quantity is smaller than or equal to the preset judgment threshold.
To make it easier for the virtual reality device 500 to evaluate multiple consecutive frames of real-time pose data, as shown in FIG. 27, in some embodiments the virtual reality device 500 may, in the step of recording the number of consecutive accumulated frames for which the angle difference is greater than the preset angle threshold, create a count variable N for storing the accumulated number of frames for which the angle difference exceeds the preset angle threshold.
Each frame is then evaluated separately: if the angle difference is greater than the preset angle threshold, the count variable is incremented by one, i.e., N = N_0 + 1; if the angle difference is smaller than or equal to the preset angle threshold, the count variable is cleared to zero, i.e., N = 0. For example, whenever a new frame of real-time pose data arrives, the virtual reality device 500 compares directions to determine whether the actual direction corresponding to the current real-time pose data deviates from the recording direction corresponding to the initial pose data by more than 20 degrees. If it does, the count variable N is incremented by 1 until the consecutive accumulated frame count represented by N reaches 100, i.e., after roughly 1.5 s, at which point the recording direction is updated and N is cleared to start counting again. If the actual direction does not deviate from the recording direction by more than 20 degrees, the count variable N may be cleared directly and the next detection period begins.
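The counter-based variant can be sketched as follows, with thresholds matching the 20° and 100-frame example above; the function signature itself is an assumption made for illustration, and angle_difference_deg is the helper sketched earlier.

ANGLE_THRESHOLD_DEG = 20.0
FRAME_COUNT_THRESHOLD = 100        # preset cumulative-frame threshold from the example

def update_recording_pose(recording_pose, realtime_pose, n, angle_difference_deg):
    # Processes one frame; returns (new recording pose, new value of the count variable N).
    if angle_difference_deg(recording_pose, realtime_pose) > ANGLE_THRESHOLD_DEG:
        n += 1                                     # N = N0 + 1
        if n >= FRAME_COUNT_THRESHOLD:
            return realtime_pose, 0                # re-align the recording direction, then restart counting
        return recording_pose, n
    return recording_pose, 0                       # deviation has disappeared: clear the counter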
In the above embodiments, the virtual reality device 500 can evaluate the angle difference for every frame of real-time pose data and decide from the frame-by-frame results whether the recording angle needs to be adjusted. This approach is highly accurate, but the virtual reality device 500 has to analyze and compute every frame, so the judgment process consumes considerable computing resources and increases the load on the processor of the virtual reality device 500. As shown in FIG. 28, to reduce the processor load, in some embodiments the virtual reality device 500 may, in the step of calculating the motion judgment quantity, first compare the angle difference with the preset angle threshold and record the acquisition time of the real-time pose data whose angle difference is greater than the preset angle threshold.
For example, when it is determined that the angle difference α_1 between the real-time pose data (w_1, x_1, y_1, z_1) and the initial pose data (w_0, x_0, y_0, z_0) is greater than the preset angle threshold α_0, the virtual reality device 500 may record the acquisition time corresponding to that real-time pose data, e.g., 16:00:01:000.
After recording the acquisition time, the virtual reality device 500 may start a new detection period and, within the preset detection period following the acquisition time, extract multiple frames of user pose data at intervals. The extraction interval may be defined in terms of time or in terms of frames. For example, the virtual reality device 500 may extract one frame of real-time pose data every 0.1 s within a 1.5 s detection period, or extract one frame of real-time pose data every 10 frames within a 1.5 s detection period.
Based on the user pose data extracted at intervals, the virtual reality device 500 can then evaluate the extracted frames to determine whether their angle differences from the initial pose data exceed the preset angle threshold. If the angle differences between all of the extracted frames of user pose data and the initial pose data are greater than the preset angle threshold, it can be determined that the motion judgment quantity is greater than the preset judgment threshold; if the angle difference between any extracted frame of user pose data and the initial pose data is smaller than or equal to the preset angle threshold, it can be determined that the motion judgment quantity is smaller than or equal to the preset judgment threshold.
It can be seen that, by extracting multiple frames of user pose data at intervals within the preset detection period, the data processing load of the virtual reality device 500 can be greatly reduced while still meeting the basic accuracy required for motion judgment, and different user needs can be met by controlling the extraction interval time or interval frame count. For example, when the user requires higher judgment accuracy, the interval time or interval frame count can be shortened to enlarge the sample size, so that the recording angle is re-aligned to the actual angle promptly. When the processing load needs to be relieved, the interval time or interval frame count can be increased to shrink the sample size and reduce the amount of data processing.
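A minimal sketch of the interval-sampling variant, under assumed window and stride values, is given below; in a real implementation these values could be tuned dynamically, as the next paragraph describes, and angle_difference_deg is again the helper sketched earlier.

ANGLE_THRESHOLD_DEG = 20.0
DETECTION_WINDOW_FRAMES = 90       # about 1.5 s of frames at 60 FPS (assumed)
SAMPLE_STRIDE = 10                 # check one frame out of every 10 within the window (assumed)

def confirm_deliberate_motion(window_poses, initial_pose, angle_difference_deg):
    # window_poses: the real-time poses of one detection window, starting after the trigger frame.
    sampled = window_poses[:DETECTION_WINDOW_FRAMES:SAMPLE_STRIDE]
    return all(
        angle_difference_deg(initial_pose, pose) > ANGLE_THRESHOLD_DEG
        for pose in sampled
    )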
It should be noted that, in order to balance judgment accuracy and processing load, the virtual reality device 500 may also be provided with load-detection modules, such as an MCU monitoring module, a memory monitoring module and a temperature monitoring module. These modules can detect the current load state of the virtual reality device 500, and the interval time or interval frame count can be set dynamically according to that load state.
As can be seen from the above technical solutions, the virtual reality device 500 provided by the present application can execute the static screen recording method through its controller, so that after the user inputs a control command for starting screen recording, the virtual reality device 500 detects the initial pose data and the real-time pose data and calculates the motion judgment quantity of the real-time pose data relative to the initial pose data, where the motion judgment quantity is the accumulated time during which the angle difference between the real-time pose data and the initial pose data is greater than the preset angle threshold. When the motion judgment quantity does not exceed the preset judgment threshold, it can be determined that the change in user pose data is caused by a small unintentional movement, so the initial pose data continues to be used to set the recording view direction, generating recording data with stable video content. When the motion judgment quantity exceeds the preset judgment threshold, it is determined that the change in user pose data is caused by a deliberate movement of the user, so the real-time pose data is used to update the recording view direction. The method combines the angle change with how long it is maintained to accurately determine the cause of a change in user pose data and intelligently lock or unlock the recording view, making the output video more stable.
Similar parts of the embodiments provided in this application may be referred to one another. The specific implementations provided above are only a few examples under the general concept of the present application and do not limit its scope of protection. For those skilled in the art, any other implementation derived from the solution of the present application without inventive effort falls within the scope of protection of the present application.

Claims (10)

  1. A virtual reality device, comprising:
    a display;
    an attitude sensor configured to detect user pose data in real time;
    a controller configured to:
    receive a control command input by a user for starting screen recording;
    in response to the control command, perform smoothing processing on the user pose data;
    capture screen-recording images in a rendered scene according to the smoothed user pose data, so as to output a screen-recording image with a stable shooting angle when the pose change is smaller than a preset shake threshold.
  2. The virtual reality device according to claim 1, wherein the controller is further configured to:
    load a virtual screen-recording camera in the rendered scene;
    start the virtual screen-recording camera after receiving the control command;
    set a shooting angle of the virtual screen-recording camera according to the smoothed user pose data, so as to capture images of the rendered scene.
  3. The virtual reality device according to claim 2, wherein the controller is further configured to:
    load a virtual display camera in the rendered scene, the virtual display camera comprising a left-eye camera and a right-eye camera, the virtual screen-recording camera being arranged at a middle position between the left-eye camera and the right-eye camera;
    set shooting angles of the left-eye camera and the right-eye camera according to the unsmoothed user pose data;
    capture images of the rendered scene through the left-eye camera and the right-eye camera.
  4. The virtual reality device according to claim 2, wherein the controller is further configured to:
    control the display to display a screen-recording parameter setting interface;
    receive screen-recording parameters input by the user through the screen-recording parameter setting interface, the screen-recording parameters comprising a screen-recording image size and a screen-recording frame rate;
    set a shooting range of the virtual screen-recording camera according to the screen-recording image size;
    set an output frame rate of screen-recording image frames of the virtual screen-recording camera according to the screen-recording frame rate.
  5. The virtual reality device according to claim 1, wherein, in the step of performing smoothing processing on the user pose data, the controller is further configured to:
    acquire the user pose data detected by the attitude sensor, the user pose data comprising components, on the x-axis, y-axis and z-axis, of the angle detected by the attitude sensor;
    extract the pose data used when the previous frame of screen-recording image was output;
    calculate equivalent pose data according to the user pose data and the pose data used when the previous frame of screen-recording image was output.
  6. The virtual reality device according to claim 5, wherein the controller calculates the equivalent pose data according to the following formulas:
    X_k = X_(k-1) + (X_DM - X_(k-1)) / (T_M - T_(k-1)) × c × (T_k - T_(k-1));
    Y_k = Y_(k-1) + (Y_DM - Y_(k-1)) / (T_M - T_(k-1)) × c × (T_k - T_(k-1));
    Z_k = Z_(k-1) + (Z_DM - Z_(k-1)) / (T_M - T_(k-1)) × c × (T_k - T_(k-1));
    where X_k, Y_k and Z_k are the angles in the X-axis, Y-axis and Z-axis directions when the k-th screen-recording image frame is output; X_(k-1), Y_(k-1) and Z_(k-1) are the angles in the X-axis, Y-axis and Z-axis directions when the (k-1)-th screen-recording image frame is output; X_DM, Y_DM and Z_DM are the angle data detected by the attitude sensor in the X-axis, Y-axis and Z-axis directions at time T_M; T_M is the time at which the attitude sensor reports the X_DM, Y_DM and Z_DM data; T_k is the time of the k-th frame; T_(k-1) is the time of the (k-1)-th frame; and c is an empirical constant between 0 and 1.
  7. The virtual reality device according to claim 1, wherein, in the step of capturing screen-recording images in the rendered scene according to the smoothed user pose data, the controller is further configured to:
    extract, frame by frame, the screen-recording images captured in the rendered scene;
    encode multiple frames of screen-recording images to generate a screen-recording video file;
    store or send the screen-recording video file.
  8. A virtual reality device, comprising:
    a display;
    an attitude sensor configured to detect user pose data in real time;
    a controller configured to:
    receive a control command input by a user for starting screen recording;
    in response to the control command, acquire user pose data through the attitude sensor;
    calculate a pose change according to the user pose data, the pose change being the difference between the user pose data and the pose data of the previous screen-recording image frame;
    capture screen-recording images from the rendered scene according to the pose change, so as to output a screen-recording image with a stable shooting angle when the pose change is smaller than a preset shake threshold.
  9. The virtual reality device according to claim 8, wherein, in the step of capturing screen-recording images from the rendered scene according to the pose change, the controller is further configured to:
    compare the pose change with the preset shake threshold;
    if the pose change is smaller than or equal to the preset shake threshold, capture screen-recording images from the rendered scene according to the pose data used when the previous screen-recording image was output, so as to output a screen-recording image with a stable shooting angle;
    if the pose change is greater than the preset shake threshold, capture screen-recording images from the rendered scene according to the acquired user pose data.
  10. An anti-shake screen recording method, applied to a virtual reality device, the virtual reality device comprising a display, an attitude sensor and a controller, the anti-shake screen recording method comprising:
    receiving a control command input by a user for starting screen recording;
    in response to the control command, performing smoothing processing on the user pose data;
    capturing screen-recording images in a rendered scene according to the smoothed user pose data, so as to output a screen-recording image with a stable shooting angle when the pose change is smaller than a preset shake threshold.
PCT/CN2021/135509 2021-01-18 2021-12-03 Virtual reality device WO2022151864A1 (zh)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
CN202110065120.XA CN112732089A (zh) 2021-01-18 2021-01-18 Virtual reality device and shortcut interaction method
CN202110065120.X 2021-01-18
CN202110065015.6 2021-01-18
CN202110065015 2021-01-18
CN202110280846.5A CN114302214B (zh) 2021-01-18 2021-03-16 Virtual reality device and anti-shake screen recording method
CN202110280846.5 2021-03-16
CN202110292608.6 2021-03-18
CN202110292608.6A CN114327034A (zh) 2021-01-18 2021-03-18 Display device and screen recording interaction method
CN202110980427.2A CN113655887A (zh) 2021-01-18 2021-08-25 Virtual reality device and static screen recording method
CN202110980427.2 2021-08-25

Publications (1)

Publication Number Publication Date
WO2022151864A1 true WO2022151864A1 (zh) 2022-07-21

Family

ID=82446827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135509 WO2022151864A1 (zh) 2021-01-18 2021-12-03 虚拟现实设备

Country Status (1)

Country Link
WO (1) WO2022151864A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160260251A1 (en) * 2015-03-06 2016-09-08 Sony Computer Entertainment Inc. Tracking System for Head Mounted Display
CN107204044A (zh) * 2016-03-17 2017-09-26 深圳多哚新技术有限责任公司 Virtual reality-based picture display method and related device
CN106020482A (zh) * 2016-05-30 2016-10-12 努比亚技术有限公司 Control method, virtual reality device and mobile terminal
CN107678539A (zh) * 2017-09-07 2018-02-09 歌尔科技有限公司 Display method for head-mounted display device and head-mounted display device
CN110505471A (zh) * 2019-07-29 2019-11-26 青岛小鸟看看科技有限公司 Head-mounted display device and screen capture method and apparatus thereof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051487A1 (zh) * 2022-09-05 2024-03-14 腾讯科技(深圳)有限公司 Parameter processing method and apparatus for virtual camera, and device, storage medium and program product

Similar Documents

Publication Publication Date Title
CN114302214B (zh) Virtual reality device and anti-shake screen recording method
CA2942377C (en) Object tracking in zoomed video
EP3902278B1 (en) Music playing method, device, terminal and storage medium
CN111726536A (zh) Video generation method and apparatus, storage medium and computer device
CN110546601B (zh) Information processing apparatus, information processing method and program
JP2018530950A (ja) Method and apparatus for playing video content anytime and from anywhere
CN111970456B (zh) Shooting control method, apparatus, device and storage medium
CN112866773B (zh) Display device and camera tracking method in multi-person scenarios
CN112261481B (zh) Interactive video creation method, apparatus, device and readable storage medium
US20150213784A1 (en) Motion-based lenticular image display
US20150215526A1 (en) Lenticular image capture
WO2021073293A1 (zh) Animation file generation method and apparatus, and storage medium
CN112732089A (zh) Virtual reality device and shortcut interaction method
CN112862859A (zh) Face feature value creation method, person locking and tracking method, and display device
WO2023134583A1 (zh) Video recording method and apparatus, and electronic device
WO2020248697A1 (zh) Display device and video communication data processing method
WO2022151864A1 (zh) Virtual reality device
WO2022193931A1 (zh) Virtual reality device and media asset playback method
CN116170624A (zh) Object display method and apparatus, electronic device and storage medium
CN112817557A (zh) Volume adjustment method based on multi-person gesture recognition and display device
CN106341716A (zh) Method and apparatus for controlling video playback with a smart ring
CN114780010A (zh) Display device and control method thereof
CN114520874B (zh) Video processing method and apparatus, and electronic device
CN112732088B (zh) Virtual reality device and monocular screenshot method
WO2023231616A1 (zh) Shooting method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21919055

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21919055

Country of ref document: EP

Kind code of ref document: A1