WO2020042494A1 - Method for screenshot of vr scene, device and storage medium - Google Patents

Method for screenshot of vr scene, device and storage medium Download PDF

Info

Publication number
WO2020042494A1
WO2020042494A1 (PCT application PCT/CN2018/123764)
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual scene
view
scene
screenshot
Prior art date
Application number
PCT/CN2018/123764
Other languages
French (fr)
Chinese (zh)
Inventor
张向军 (ZHANG Xiangjun)
Original Assignee
歌尔股份有限公司 (Goertek Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司 (Goertek Inc.)
Publication of WO2020042494A1 publication Critical patent/WO2020042494A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present invention relates to the technical field of virtual reality (VR), and in particular, to a method, a device, and a storage medium for capturing a VR scene.
  • Screenshot is a way to capture the displayed content.
  • the user can save the content displayed on the screen as an image file through the screenshot operation.
  • a smartphone can save the content displayed on the screen through the screenshot operation.
  • When a two-dimensional image display device performs a screenshot operation, the content currently displayed on the screen is directly taken as the target screenshot.
  • For a three-dimensional image display device such as a VR device, however, the screenshot obtained by directly capturing the content currently displayed on the screen is generally different from the virtual scene the VR device is actually presenting.
  • Some embodiments of the present invention provide a VR scene screenshot method, device and storage medium, which capture the 3D virtual scene actually displayed by the VR device in response to a screenshot instruction while the VR device is displaying a VR scene.
  • Some embodiments of the present invention provide a VR scene screenshot method, including: in response to a screenshot instruction, selecting at least two reference fields of view in a three-dimensional virtual scene that can be displayed by a VR device, such that the field-of-view edges of two adjacent reference fields of view overlap; obtaining virtual scene fragments corresponding to the at least two reference fields of view; and performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot.
  • In some embodiments, selecting at least two reference fields of view in the three-dimensional virtual scene displayed by the VR device includes: uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene; and setting the field of view (FOV) of each of the at least two different user viewpoints to obtain at least two reference fields of view, where at least one field of view satisfies FOV > 360°/N, N being the number of user viewpoints contained in the at least two user viewpoints.
  • In some embodiments, uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene includes: obtaining the user's current head position and pose data; determining the user's left and/or right eye viewpoints in the three-dimensional virtual scene according to the head position and pose data; determining a base user viewpoint according to the left and/or right eye viewpoints; and selecting, within the three-dimensional virtual scene, at least one viewpoint uniformly distributed with the base user viewpoint as an auxiliary user viewpoint.
  • In some embodiments, the number of auxiliary user viewpoints is two.
  • In some embodiments, obtaining the virtual scene fragments corresponding to the at least two reference fields of view includes: performing scene rendering at each of the at least two different user viewpoints in turn, to render the virtual scene fragments corresponding to the at least two reference fields of view.
  • In some embodiments, performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot includes: performing edge similarity detection on the virtual scene fragments corresponding to the at least two reference fields of view; and, according to the detection result, stitching the virtual scene fragments whose edge similarity is greater than a set threshold.
  • In some embodiments, performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot includes: determining, from among the virtual scene fragments corresponding to the at least two reference fields of view, the virtual scene fragment corresponding to the base user viewpoint as the stitching center; determining the relative position between the virtual scene fragment corresponding to the auxiliary user viewpoint and the stitching center according to the positional relationship between the auxiliary user viewpoint and the base user viewpoint; and performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view according to the stitching center and the relative position, to generate the VR scene screenshot.
  • In some embodiments, the method further includes: storing the VR scene screenshot to a designated path of the VR device, and notifying the user that the screen capture operation has been completed and/or displaying the designated path of the VR scene screenshot.
  • FIG. 1 A block diagram illustrating an exemplary computing environment in accordance with the present invention.
  • Still other embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed can implement the VR scene screenshot method provided by the present invention.
  • At least two reference fields of view are selected in a three-dimensional virtual scene that can be displayed by the VR device, with the edges of two adjacent reference fields of view overlapping; the virtual scene fragments corresponding to the at least two reference fields of view are then obtained and stitched together. The generated VR scene screenshot therefore matches the 3D virtual scene actually displayed by the VR device, effectively capturing the real display content of the VR equipment.
  • FIG. 1a is a schematic diagram of a screenshot process of a two-dimensional display device
  • FIG. 1b is a schematic diagram of a three-dimensional display principle of a VR device
  • FIG. 1c is a schematic diagram of a screen display content of a VR device
  • FIG. 1d is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention.
  • FIG. 2a is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention.
  • FIG. 2b is a schematic diagram of correspondence between different user viewpoints and VR camera viewpoints provided by some embodiments of the present invention.
  • FIG. 2c is a schematic diagram of splicing virtual scene fragments provided by some embodiments of the invention.
  • FIG. 3 is a schematic structural diagram of a VR device according to some embodiments of the present invention.
  • FIG. 4 is a schematic structural diagram of a VR device according to some embodiments of the present invention.
  • the screenshot operation of the two-dimensional image display device can be shown in FIG. 1a.
  • the content displayed on the screen of the two-dimensional image display device is the real display content of the two-dimensional image display device at a certain moment.
  • When the two-dimensional image display device receives a screenshot instruction, it can directly capture the real content displayed on the current screen and save it as a picture (for example, a .jpg file).
  • The user wearing the VR device is in a 3D virtual space at all times, and any image the user sees should be an image within the entire 3D virtual space.
  • the content displayed on the screen of the VR device is only a part of the three-dimensional space that the VR device can display at a certain moment, and is usually subjected to distortion processing, as shown in FIG. 1c. Therefore, when a VR device receives a screenshot instruction, simply capturing the content being displayed on the screen cannot meet the actual screenshot requirements.
  • the present invention provides a solution, which will be described in detail below.
  • FIG. 1d is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention. As shown in FIG. 1d, the method includes:
  • Step 101 In response to a screenshot instruction, at least two reference fields of view are selected in a three-dimensional virtual scene that can be displayed by the VR device, such that the edges of two adjacent reference fields of view overlap.
  • Step 102 Obtain a virtual scene segment corresponding to each of the at least two reference fields of view.
  • Step 103 Perform image stitching on the virtual scene segments corresponding to the at least two reference fields of view to generate a VR scene screenshot.
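As an illustrative sketch only (not the patent's implementation), steps 101 to 103 can be outlined in Python. The names `render_fragment` and `stitch` are hypothetical stand-ins for the VR engine's renderer and an image-stitching routine:

```python
def select_reference_fovs(n_viewpoints, overlap_deg=10.0):
    """Step 101 (sketch): uniformly place N viewpoints around 360 degrees,
    giving each a field of view wide enough that adjacent edges overlap."""
    step = 360.0 / n_viewpoints
    fov = step + overlap_deg  # FOV > 360/N guarantees edge overlap
    return [{"yaw_deg": i * step, "fov_deg": fov} for i in range(n_viewpoints)]

def take_vr_screenshot(n_viewpoints, render_fragment, stitch):
    """Steps 101-103 (sketch): select fields of view, render one fragment
    per field of view, and stitch the fragments into one screenshot."""
    fovs = select_reference_fovs(n_viewpoints)          # step 101
    fragments = [render_fragment(v) for v in fovs]      # step 102
    return stitch(fragments)                            # step 103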
  • the screenshot instruction may be initiated by a user, or may be initiated by a VR device.
  • The user can send a screenshot instruction through a specific physical button on the VR device, through a set hand or head movement, or by voice; this embodiment does not limit this.
  • the screenshot instruction may be sent by an application currently running on the VR device, or may be sent by the VR device according to a set screenshot period, depending on the application scenario.
  • At least two reference fields of view can be selected within the three-dimensional virtual scene that the VR device can display.
  • the reference field of view is used to simulate the field of view of the user in the three-dimensional virtual scene.
  • at least two reference fields of view may be selected.
  • the at least two reference fields of view can simulate at least two fields of view of the user in the three-dimensional virtual scene.
  • The edges of the fields of view of two adjacent reference fields of view overlap, so that the field-of-view ranges of the at least two reference fields of view can cover the panorama of the three-dimensional virtual scene.
  • the screenshot shows the 3D scene that the VR device is showing when the screenshot instruction is received.
  • At least two reference fields of view are selected in a three-dimensional virtual scene that can be displayed by the VR device, with the edges of two adjacent reference fields of view overlapping; the virtual scene fragment corresponding to each of the at least two reference fields of view is then obtained, and the fragments are stitched together. The generated VR scene screenshot therefore matches the 3D virtual scene actually displayed by the VR device, effectively capturing the real display content of the VR device.
  • FIG. 2a is a schematic flowchart of a VR scene screenshot method according to another embodiment of the present invention. As shown in FIG. 2a, the method includes:
  • Step 201 In response to a screenshot instruction, at least two different user viewpoints are uniformly selected in a three-dimensional virtual scene that can be displayed by the VR device.
  • Step 202 Set the field of view (FOV) of each of the at least two different user viewpoints to obtain at least two reference fields of view, where at least one field of view satisfies FOV > 360°/N, N being the number of user viewpoints contained in the at least two user viewpoints.
  • Step 203 Perform scene rendering at each of the at least two different user viewpoints in turn, to render the virtual scene fragments corresponding to the at least two reference fields of view.
  • Step 204 Perform edge similarity detection on the virtual scene segments corresponding to the at least two reference fields of view.
  • Step 205 According to the detection result of the edge similarity detection, among the virtual scene fragments corresponding to the at least two reference fields of view, the virtual scene fragments whose edge similarity is greater than a set threshold are stitched.
  • Step 206 Store the VR scene screenshot to a designated path of the VR device, and notify the user that the screen capture operation has been completed and / or the designated path of the VR scene screenshot is displayed.
  • The user viewpoint refers to the visual base point in the three-dimensional virtual scene from which the user views the scene; the visual base point generally includes viewing position information and viewing direction information, as shown by viewpoint A, viewpoint B, and viewpoint C in FIG. 2b.
  • a VR device can display a virtual scene that matches the user's viewpoint.
  • The VR scene is developed with tools such as Unity 3D. These development tools can create a three-dimensional virtual space, design a three-dimensional virtual scene in that space, and design a VR virtual camera to simulate the user's eyes; the viewpoint of the VR virtual camera is the user's viewpoint.
  • The viewpoints A, B, and C of the user can be simulated with the viewpoints A', B', and C' of the VR virtual camera, respectively.
  • selecting different user viewpoints in a three-dimensional virtual scene can simulate different viewing positions and viewing directions of the user, and then based on the different viewing positions and viewing directions, obtain a three-dimensional scene being displayed by the VR device when a screenshot instruction is received.
  • uniformly selecting different user viewpoints is beneficial to quickly calculate the size of the field of view corresponding to each viewpoint.
  • a method for uniformly selecting at least two different user viewpoints in a three-dimensional virtual scene includes:
  • the head position and posture data of the user can be obtained according to the inertial detection unit, multi-axis acceleration sensor, gyroscope and other devices installed on the VR device, and details are not described here. Then, according to the head position pose data, the left and / or right eye viewpoints of the user are respectively determined in the three-dimensional virtual scene, and the base user viewpoint is determined according to the left and / or right eye viewpoints of the user.
  • The user's left-eye viewpoint can be used as the base user viewpoint, the right-eye viewpoint can be used instead, or the middle viewpoint between the left-eye and right-eye viewpoints can be selected; the choice of base user viewpoint is not limited in this embodiment.
  • at least one viewpoint uniformly distributed with the viewpoint of the base user is selected as the auxiliary user viewpoint.
  • the number of auxiliary user viewpoints may be two, and the two auxiliary user viewpoints may be respectively distributed on both sides of the base user viewpoint.
  • one basic user viewpoint and two auxiliary user viewpoints can provide the highest image rendering efficiency and image stitching efficiency on the premise of ensuring that the subsequent virtual scene fragments completely cover the three-dimensional virtual scene.
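A minimal sketch of this viewpoint selection, assuming the user's gaze direction is reduced to a single yaw angle taken from the head pose data (a simplification; real head pose data is a full position and orientation):

```python
def select_user_viewpoints(head_yaw_deg, n_auxiliary=2):
    """Pick a base viewpoint at the user's current gaze direction and
    auxiliary viewpoints uniformly distributed with it around the full
    circle (with 2 auxiliaries they sit at +/-120 degrees from the base)."""
    n_total = 1 + n_auxiliary
    step = 360.0 / n_total
    base = head_yaw_deg % 360.0
    viewpoints = [("base", base)]
    for i in range(1, n_total):
        viewpoints.append(("auxiliary", (base + i * step) % 360.0))
    return viewpoints
```

With a head yaw of 30°, this places the base viewpoint at 30° and two auxiliary viewpoints at 150° and 270°, evenly covering the circle.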
  • In some embodiments, the field of view corresponding to at least one user viewpoint satisfies FOV > 360°/N; in other embodiments, the field of view corresponding to each user viewpoint satisfies FOV > 360°/N, where N is the number of user viewpoints.
  • The selected at least two different user viewpoints are associated with the user's current head position and pose data, which helps determine the virtual scene fragment the user is currently watching in the subsequent process, and yields a VR scene screenshot that better matches the user's real viewing.
  • step 202 may be performed.
  • For example, when N = 3, the field of view of each user viewpoint must be greater than 360°/3 = 120°, and can be set to, for example, 130° or 150°.
  • In this way, the edges of two adjacent fields of view overlap, which is beneficial for subsequent image stitching.
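The coverage condition can be checked with two one-line helpers (an illustrative sketch of the FOV > 360°/N rule, not code from the patent):

```python
def fov_covers_panorama(fov_deg, n_viewpoints):
    """The panorama is covered with overlapping edges iff FOV > 360/N."""
    return fov_deg > 360.0 / n_viewpoints

def edge_overlap_deg(fov_deg, n_viewpoints):
    """Angular overlap shared by two adjacent uniform fields of view."""
    return fov_deg - 360.0 / n_viewpoints
```

For N = 3 and FOV = 130°, each pair of adjacent fields of view shares a 10° overlap band; FOV = 120° would leave no overlap at all.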
  • Alternatively, when selecting user viewpoints, they can be chosen arbitrarily, without considering uniformity between viewpoints; for example, multiple user viewpoints can be randomly selected in the three-dimensional virtual scene that can be displayed by the VR device. In this manner, the size of the field of view corresponding to each user viewpoint can be calculated according to the positional relationship between that viewpoint and the other user viewpoints, such that the sum of the fields of view corresponding to the multiple user viewpoints is greater than 360°.
  • For example, user viewpoints A and B may be given smaller field angles and user viewpoint C a larger one: the field angle corresponding to A may be set to 90°, that corresponding to B to 120°, and that corresponding to C to 160°.
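For such non-uniform selections, the necessary coverage condition named above (the field angles must sum to more than 360°) can be sketched as:

```python
def nonuniform_fovs_cover(fovs_deg):
    """With arbitrarily placed viewpoints, covering the panorama with some
    edge overlap requires the field angles to sum to more than 360 degrees."""
    return sum(fovs_deg) > 360.0
```

The example values above (90° + 120° + 160° = 370°) satisfy the condition; shrinking C to 140° (total 350°) would not.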
  • In step 203, scene rendering may be performed at each of the at least two different user viewpoints in turn, to render the virtual scene fragments corresponding to the at least two reference fields of view.
  • Specifically, each user viewpoint can be used as the viewpoint of the VR virtual camera: the VR virtual camera is adjusted to each different user viewpoint in turn, and a 3D virtual scene rendering program is run at each viewpoint to render the virtual scene fragment corresponding to that viewpoint's field of view.
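A hedged sketch of this render loop, with `StubCamera` as a hypothetical stand-in for a real engine camera (for example a Unity camera object, whose actual API differs):

```python
class StubCamera:
    """Minimal stand-in for an engine's VR virtual camera; a real engine
    would expose its own camera object and renderer."""
    def __init__(self):
        self.viewpoint = None

    def set_viewpoint(self, vp):
        self.viewpoint = vp

    def render(self):
        # A real renderer would return a framebuffer; we return a tag.
        return f"fragment@{self.viewpoint}"

def render_all_fragments(camera, viewpoints):
    """Move one virtual camera through each user viewpoint in turn and
    collect the virtual scene fragment rendered at each viewpoint."""
    fragments = []
    for vp in viewpoints:
        camera.set_viewpoint(vp)   # reposition the VR virtual camera
        fragments.append(camera.render())
    return fragments
```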
  • In step 204, after the virtual scene fragments corresponding to the at least two reference fields of view have been acquired, edge similarity detection is performed on the fragments.
  • a picture correlation coefficient method may be used to find virtual scene fragments with similar edges.
  • the edges of each virtual scene segment can be identified separately, and the edge correlation coefficients of two adjacent virtual scene segments are calculated based on the recognition results, and virtual scene segments with similar edges are determined based on the correlation coefficients.
  • Other picture edge similarity algorithms can also be used to identify virtual scene fragments with similar edges, such as the Euclidean distance method, perceptual hashing, and sliding-window template matching; this embodiment is not limited to these.
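A minimal illustration of the picture correlation coefficient method on a single pair of facing edge columns (a real implementation would compare full strips of pixel values; the 0.9 threshold is an assumed example value, not one given by the patent):

```python
def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def edges_match(right_edge, left_edge, threshold=0.9):
    """Two fragments are stitch candidates when the correlation of their
    facing edge columns exceeds the set threshold."""
    return correlation(right_edge, left_edge) > threshold
```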
  • In step 205, among the virtual scene fragments corresponding to the at least two reference fields of view, the fragments whose edge similarity is greater than a set threshold may be stitched.
  • The virtual scene fragments in the reference fields of view corresponding to user viewpoints A, B, and C can be stitched, where the overlapping area between adjacent fragments is the region with similar edges.
  • FIG. 2c illustrates horizontal splicing; in practice, vertical splicing or splicing at other angles may also be used.
  • The virtual scene fragment corresponding to the base user viewpoint can be used as the stitching center, so that the resulting VR scene screenshot is centered on the virtual scene fragment the user is currently watching, which better matches the user's actual viewing.
  • Specifically, the virtual scene fragment corresponding to the base user viewpoint may first be determined as the stitching center from among the virtual scene fragments corresponding to the at least two reference fields of view; the relative position between the virtual scene fragment corresponding to the auxiliary user viewpoint and the stitching center is then determined according to the positional relationship between the auxiliary user viewpoint and the base user viewpoint; finally, image stitching is performed on the virtual scene fragments corresponding to the at least two reference fields of view according to the stitching center and the relative position, to generate the VR scene screenshot.
  • The virtual scene fragment in the field of view corresponding to viewpoint B can be used as the center of the VR screenshot, and the virtual scene fragments in the fields of view corresponding to viewpoint A and viewpoint C are spliced on either side of it.
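The base-centered ordering can be sketched as follows; fragments are hypothetical `(role, yaw_deg, image)` tuples, and auxiliary fragments are placed left or right of the base fragment by their signed yaw offset:

```python
def stitch_order(fragments):
    """Place the base-viewpoint fragment in the middle and order auxiliary
    fragments around it by yaw relative to the base; returns the images
    in left-to-right stitching order."""
    base = next(f for f in fragments if f[0] == "base")

    def signed_offset(f):
        d = (f[1] - base[1]) % 360.0
        return d - 360.0 if d > 180.0 else d  # map to (-180, 180]

    return [f[2] for f in sorted(fragments, key=signed_offset)]
```

With B as the base at 0° and auxiliaries C at 120° and A at 240° (i.e. -120°), this yields the left-to-right order A, B, C, matching the horizontal splicing of FIG. 2c.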
  • The spliced VR scene screenshot may also be stored to a specified path of the VR device, and the user notified that the screen capture operation has been completed and/or shown the designated path of the VR scene screenshot.
  • the method for notifying the user that the screen capture operation has been completed may be a voice mode or a text mode, which is not limited in this embodiment.
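A sketch of the final storage step (the directory, file name, and image bytes below are all illustrative assumptions; the returned path is what the device would then announce or display to the user):

```python
import os

def save_screenshot(image_bytes, directory, filename="vr_screenshot.jpg"):
    """Write the stitched screenshot to the designated path and return the
    path, so the device can notify the user by voice or text."""
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, filename)
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path
```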
  • At least two reference fields of view are uniformly selected in the three-dimensional virtual scene that can be displayed by the VR device, with the edges of two adjacent reference fields of view overlapping; the virtual scene fragments corresponding to the at least two reference fields of view are then obtained and stitched according to the edge similarity of the fragments. The generated VR scene screenshot therefore matches the 3D virtual scene actually displayed by the VR device, effectively capturing the real display content of the VR device.
  • the execution subject of each step of the method provided in the foregoing embodiment may be the same device, or the method may also use different devices as execution subjects.
  • For example, the execution subject of steps 201 to 204 may be device A; alternatively, the execution subject of steps 201 and 202 may be device A and the execution subject of step 203 may be device B; and so on.
  • the above embodiment describes a part of the implementation method of the VR scene screenshot method provided by some embodiments of the present invention.
  • This method may be implemented by the VR device shown in FIG. 3.
  • The VR device includes: a memory 301, a processor 302, an input device 303, and an output device 304.
  • the memory 301, the processor 302, the input device 303, and the output device 304 may be connected by using a bus or other methods.
  • the bus connection is taken as an example.
  • the memory 301 may be directly coupled to the processor 302, and the input device 303 and the output device 304 may be directly or indirectly connected to the processor 302 through a data line and a data interface.
  • the above connection mode is only used for illustrative description, and does not constitute any limitation on the protection scope of the embodiment of the present invention.
  • the memory 301 is used to store one or more computer instructions, and may be configured to store various other data to support operations on the VR device. Examples of such data include instructions for any application or method for operating on a VR device.
  • The memory 301 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • The memory 301 may optionally include memories remotely located relative to the processor 302, and these remote memories may be connected to the VR device through a network.
  • Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • A processor 302 is coupled to the memory 301 and configured to execute the one or more computer instructions to: in response to a screenshot instruction, select at least two reference fields of view within a three-dimensional virtual scene that can be displayed by the VR device, such that the edges of two adjacent reference fields of view overlap; obtain the virtual scene fragments corresponding to the at least two reference fields of view; and perform image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot.
  • The processor 302 is specifically configured to: uniformly select at least two different user viewpoints in the three-dimensional virtual scene; and set the FOVs of the at least two different user viewpoints to obtain at least two reference fields of view, where at least one FOV satisfies FOV > 360°/N, N being the number of user viewpoints contained in the at least two user viewpoints.
  • The processor 302 is specifically configured to: obtain the user's current head position and pose data; determine the user's left and/or right eye viewpoints in the three-dimensional virtual scene; determine the base user viewpoint according to the left and/or right eye viewpoints; and select at least one viewpoint uniformly distributed with the base user viewpoint as the auxiliary user viewpoint.
  • the number of the auxiliary user viewpoints is two.
  • When acquiring the virtual scene fragments corresponding to the at least two reference fields of view, the processor 302 is specifically configured to perform scene rendering at each of the at least two different user viewpoints in turn, rendering the virtual scene fragments corresponding to the at least two reference fields of view.
  • The processor 302 is specifically configured to: perform edge similarity detection on the virtual scene fragments corresponding to the at least two reference fields of view; and, according to the detection result, stitch the virtual scene fragments whose edge similarity is greater than a set threshold.
  • The processor 302 is specifically configured to: determine, from among the virtual scene fragments corresponding to the at least two reference fields of view, the virtual scene fragment corresponding to the base user viewpoint as the stitching center; determine the relative position between the virtual scene fragment corresponding to the auxiliary user viewpoint and the stitching center according to the positional relationship between the auxiliary user viewpoint and the base user viewpoint; and perform image stitching on the virtual scene fragments corresponding to the at least two reference fields of view according to the stitching center and the relative position, to generate the VR scene screenshot.
  • The processor 302 is further configured to: store the VR scene screenshot to a specified path of the VR device, and notify the user that the screen capture operation has been completed and/or display the designated path of the VR scene screenshot.
  • the input device 303 can receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the VR device.
  • the output device 304 may include a display device such as a display screen.
  • the VR device further includes: a power supply component 305.
  • the power source component 305 provides power to various components of the device where the power source component is located.
  • Power components can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the equipment in which the power components are located.
  • the above VR device can execute the VR scene screenshot method provided by the embodiment of the present application, and has corresponding function modules and beneficial effects of executing the method.
  • the present invention also provides a computer-readable storage medium storing a computer program, and the computer program, when executed, can implement the steps in the method that the VR device can execute.
  • the VR device provided by some embodiments of the present invention may be an external head-mounted display device or an integrated head-mounted display device, wherein the external head-mounted display device may be used in conjunction with an external processing system (such as a computer processing system).
  • FIG. 4 shows a schematic diagram of the internal configuration of the VR device 400 in some embodiments.
  • the display unit 401 may include a display panel.
  • the display panel is disposed on a side surface of the VR device 400 that faces the user's face, and may be a whole panel, or a left panel and a right panel corresponding to the left and right eyes of the user, respectively.
  • the display panel may be an electroluminescence (EL) element, a liquid crystal display, a micro-display with a similar structure, or a laser scanning display such as a direct retinal display.
  • the virtual image optical unit 402 presents the image displayed by the display unit 401 to the user in an enlarged manner, allowing the user to observe the displayed image as an enlarged virtual image.
  • the display image output to the display unit 401 may be an image of a virtual scene provided by a content reproduction device (a Blu-ray disc or DVD player) or a streaming server, or an image of a real scene captured by the external camera 410.
  • the virtual image optical unit 402 may include a lens unit, such as a spherical lens, an aspherical lens, a Fresnel lens, and the like.
  • the input operation unit 403 includes at least one operation component for performing an input operation, such as a key, a button, a switch, or other components having similar functions, receives a user instruction through the operation component, and outputs the instruction to the control unit 407.
  • the status information acquiring unit 404 is configured to acquire status information of a user wearing the VR device 400.
  • the status information obtaining unit 404 may include various types of sensors for detecting status information by itself, and may obtain status information from external devices (such as a smartphone, a wristwatch, and other multifunctional terminals worn by the user) through the communication unit 405.
  • the state information acquisition unit 404 may acquire position information and / or posture information of a user's head.
  • the status information acquisition unit 404 may include one or more of a gyroscope sensor, an acceleration sensor, a global positioning system (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio frequency field strength sensor.
  • the status information acquisition unit 404 acquires status information of a user wearing the VR device 400, such as the user's operation status (whether the user is wearing the VR device 400), action status (a stationary, walking, or running state, hand or fingertip posture, eye open or closed state, line of sight, pupil size), mental state (whether the user is immersed in observing the displayed image, and the like), and even physiological state.
  • the communication unit 405 performs a communication process with an external device, a modulation and demodulation process, and an encoding and decoding process of a communication signal.
  • the control unit 407 may transmit transmission data from the communication unit 405 to an external device.
  • the communication method may be wired or wireless, such as Mobile High-definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (Wi-Fi), Bluetooth or Bluetooth Low Energy communication, or an IEEE 802.11s standard mesh network.
  • the communication unit 405 may be a cellular wireless transceiver operating according to Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and similar standards.
  • the VR device 400 may further include a storage unit, and the storage unit 406 is a mass storage device configured with a solid state drive (SSD) or the like.
  • the storage unit 406 may store an application program or various types of data. For example, content viewed by a user using the VR device 400 may be stored in the storage unit 406.
  • the VR device 400 may further include a control unit 407 and associated memory (for example, the ROM 407A and the RAM 407B shown in the figure).
  • the control unit 407 may include a central processing unit (CPU) or another device with similar functions.
  • the control unit 407 may be used to execute an application program stored in the storage unit 406, or the control unit 407 may also be used to execute the methods, functions, and circuits disclosed in some embodiments of the present invention.
  • the image processing unit 408 performs signal processing, such as image quality correction of the image signal output from the control unit 407, and converts its resolution to match the screen of the display unit 401. The display driving unit 409 then selects each row of pixels of the display unit 401 in turn and scans them row by row, providing pixel signals based on the processed image signal.
  • the VR device 400 may further include an external camera.
  • the external cameras 410 may be disposed on the front surface of the main body of the VR device 400, and there may be one or more of them.
  • the external camera 410 can acquire three-dimensional information and can also be used as a distance sensor.
  • a position sensitive detector (PSD) or other type of distance sensor that detects a reflected signal from an object may be used with the external camera 410.
  • the external camera 410 and the distance sensor may be used to detect a body position, posture, and shape of a user wearing the VR device 400.
  • the user can directly view or preview the real scene through the external camera 410.
  • the VR device 400 may further include a sound processing unit, and the sound processing unit 411 may perform sound quality correction or sound amplification of a sound signal output from the control unit 407, signal processing of an input sound signal, and the like. Then, the sound input / output unit 412 outputs sound to the outside after sound processing and inputs sound from a microphone.
  • the structure or component shown by the dashed box in FIG. 4 may be independent of the VR device 400, for example, provided in an external processing system (such as a computer system) used together with the VR device 400; alternatively, the structure or component may be disposed inside or on the surface of the VR device 400.
  • the storage medium may be a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
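The splicing-centre logic attributed to processor 302 above can be illustrated with a short sketch. This is a hedged illustration, not the patent's implementation: the segment representation (`id`, `yaw_deg`) and the use of a signed yaw offset as the "positional relationship" are assumptions made here:

```python
def assemble_screenshot(segments, base_id):
    """Order virtual scene segments around the splicing centre.

    The segment rendered from the base user viewpoint sits at the
    centre; the others are placed to its left or right according to
    their signed yaw offset from the base viewpoint (a hypothetical
    stand-in for the positional relationship between viewpoints)."""
    base = next(s for s in segments if s["id"] == base_id)
    others = [s for s in segments if s["id"] != base_id]

    def offset(seg):
        # signed angular offset in [-180, 180); negative = left of centre
        return (seg["yaw_deg"] - base["yaw_deg"] + 180.0) % 360.0 - 180.0

    left = sorted((s for s in others if offset(s) < 0), key=offset)
    right = sorted((s for s in others if offset(s) >= 0), key=offset)
    return [s["id"] for s in left] + [base_id] + [s["id"] for s in right]
```

With a base viewpoint at 0° and auxiliaries at 120° and 240°, the 240° segment lands left of the centre and the 120° segment right of it, so the stitched panorama stays centred on what the user was looking at.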

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Some embodiments of the present application disclose a method for taking a screenshot of a VR scene, a device, and a storage medium. In these embodiments, at least two reference fields of view are selected in a three-dimensional virtual scene that can be displayed by a VR device, with the edges of adjacent reference fields of view overlapping; the virtual scene segments corresponding to the at least two reference fields of view are then acquired, and the images of these segments are stitched together; the generated VR scene screenshot therefore matches the three-dimensional virtual scene actually displayed by the VR device, effectively capturing the device's real display content.

Description

VR scene screenshot method, device, and storage medium

This application claims priority to Chinese patent application No. 201811013485.2, filed with the Chinese Patent Office on August 31, 2018 and entitled "VR scene screenshot method, device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field

The present invention relates to the technical field of VR (Virtual Reality), and in particular to a VR scene screenshot method, device, and storage medium.
Background

A screenshot is a way to capture displayed content: through a screenshot operation, the user can save the content shown on the screen as an image file. For example, a smartphone can save its on-screen content via a screenshot operation.

At present, when a two-dimensional image display device performs a screenshot operation, it usually captures the content currently displayed on the screen directly as the target screenshot. However, when a three-dimensional image display device such as a VR device performs a screenshot operation, the image shown on its screen is a processed image, so a screenshot obtained by directly capturing the content currently displayed on the screen generally differs from the virtual scene the VR device actually presents.
Summary of the Invention

Some embodiments of the present invention provide a VR scene screenshot method, device, and storage medium, used to capture, according to a screenshot instruction, the three-dimensional virtual scene actually displayed by a VR device while the device is presenting a VR scene.

Some embodiments of the present invention provide a VR scene screenshot method, including: in response to a screenshot instruction, selecting at least two reference fields of view in a three-dimensional virtual scene that can be displayed by a VR device, with the field-of-view edges of adjacent reference fields of view overlapping; acquiring the virtual scene segments corresponding to the at least two reference fields of view; and performing image stitching on those virtual scene segments to generate a VR scene screenshot.
Optionally, selecting at least two reference fields of view in the three-dimensional virtual scene displayed by the VR device includes: uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene; and setting the field of view (FOV) of each of the at least two user viewpoints to obtain at least two reference fields of view, where at least one field-of-view angle satisfies FOV > 360°/N, N being the number of user viewpoints.

Optionally, uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene includes: acquiring the user's current head pose data; determining the user's left-eye and/or right-eye viewpoint in the three-dimensional virtual scene according to the head pose data; determining a base user viewpoint according to the user's left-eye and/or right-eye viewpoint; and selecting, in the three-dimensional virtual scene, at least one viewpoint evenly distributed with the base user viewpoint as an auxiliary user viewpoint.

Optionally, the number of auxiliary user viewpoints is two.

Optionally, acquiring the virtual scene segments corresponding to the at least two reference fields of view includes: performing scene rendering at each of the at least two different user viewpoints in turn, so as to render the virtual scene segments corresponding to the at least two reference fields of view.

Optionally, performing image stitching on the virtual scene segments corresponding to the at least two reference fields of view to generate a VR scene screenshot includes: performing edge similarity detection on the virtual scene segments corresponding to the at least two reference fields of view; and, according to the detection result, stitching together those virtual scene segments whose edge similarity is greater than a set threshold.

Optionally, performing image stitching on the virtual scene segments corresponding to the at least two reference fields of view to generate a VR scene screenshot includes: determining, from the virtual scene segments corresponding to the at least two reference fields of view, the virtual scene segment corresponding to the base user viewpoint as the splicing center; determining the position of the virtual scene segment corresponding to the auxiliary user viewpoint relative to the splicing center according to the positional relationship between the auxiliary user viewpoint and the base user viewpoint; and performing image stitching on the virtual scene segments according to the splicing center and the relative positions, to generate the VR scene screenshot. Further optionally, the method also includes: storing the VR scene screenshot under a specified path on the VR device, and notifying the user that the screenshot operation has been completed and/or displaying the specified path of the VR scene screenshot.
Other embodiments of the present invention further provide a VR device, including a memory and a processor, where the memory is used to store one or more computer instructions, and the processor, coupled to the memory, executes the one or more computer instructions to perform the VR scene screenshot method provided by the present invention.

Still other embodiments of the present invention provide a computer-readable storage medium storing a computer program which, when executed, can implement the VR scene screenshot method provided by the present invention.

In the VR scene screenshot method provided by some embodiments of the present invention, at least two reference fields of view are selected in a three-dimensional virtual scene that can be displayed by the VR device, with the field-of-view edges of adjacent reference fields of view overlapping; the virtual scene segments corresponding to the at least two reference fields of view are then acquired and stitched together, so that the generated VR scene screenshot matches the three-dimensional virtual scene actually displayed by the VR device, effectively capturing the device's real display content.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the drawings of the present application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1a is a schematic diagram of a screenshot process of a two-dimensional display device;

FIG. 1b is a schematic diagram of the three-dimensional display principle of a VR device;

FIG. 1c is a schematic diagram of the screen display content of a VR device;

FIG. 1d is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention;

FIG. 2a is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention;

FIG. 2b is a schematic diagram of the correspondence between different user viewpoints and VR camera viewpoints provided by some embodiments of the present invention;

FIG. 2c is a schematic diagram of stitching virtual scene segments provided by some embodiments of the invention;

FIG. 3 is a schematic structural diagram of a VR device provided by some embodiments of the present invention;

FIG. 4 is a schematic structural diagram of a VR device provided by some embodiments of the present invention.
Detailed Description

The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The screenshot operation of a two-dimensional image display device (taking a mobile phone as an example) can be as shown in FIG. 1a: the content displayed on the screen is the device's real display content at a given moment. When the device receives a screenshot instruction, it can directly capture the real content displayed on the current screen and save it as a picture (for example, a jpg file). For a VR device, however, as shown in FIG. 1b, the user wearing the device is at all times inside a three-dimensional virtual space, and the image the user can see should be the image of the entire three-dimensional virtual space. Unlike a two-dimensional image display device, the content shown on the screen of a VR device is only the part of the three-dimensional space that the device can present at a given moment, and it is usually distortion-processed, as shown in FIG. 1c. Therefore, when a VR device receives a screenshot instruction, simply capturing what the screen is displaying does not meet the actual screenshot requirement. The present invention provides a solution to this technical problem, described in detail below.
FIG. 1d is a schematic flowchart of a VR scene screenshot method provided by some embodiments of the present invention. As shown in FIG. 1d, the method includes:

Step 101: In response to a screenshot instruction, select at least two reference fields of view in a three-dimensional virtual scene that can be displayed by the VR device, with the field-of-view edges of adjacent reference fields of view overlapping.

Step 102: Acquire the virtual scene segments corresponding to the at least two reference fields of view.

Step 103: Perform image stitching on the virtual scene segments corresponding to the at least two reference fields of view to generate a VR scene screenshot.
In this embodiment, the screenshot instruction may be initiated by the user or by the VR device. When initiated by the user, the instruction may be sent through a specific physical button on the VR device, through a set hand or head gesture, or through voice activation; this embodiment places no limit on the manner. When initiated by the VR device, the instruction may be sent by the application currently running on the device, or by the device itself according to a set screenshot period, depending on the application scenario.

Based on the above description, to capture the real display content of the VR device, at least two reference fields of view may be selected, in response to the screenshot instruction, within the three-dimensional virtual scene that the VR device can display. A reference field of view simulates the user's visual range within the three-dimensional virtual scene. Considering the limit of the human-eye field of view (no more than 180°) and the realism and immersion requirements of the three-dimensional virtual scene, at least two reference fields of view can be selected; they simulate at least two of the user's visual ranges within the scene. Among these reference fields of view, the field-of-view edges of adjacent ones overlap, so that together their ranges cover the panorama of the three-dimensional virtual scene.
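The selection of overlapping reference fields of view described above can be sketched in a few lines. This is an illustrative sketch only — the viewpoint layout (`yaw_deg`) and the fixed overlap margin are assumptions, not the patent's method:

```python
def select_reference_fields(n_viewpoints=3, overlap_deg=10.0):
    """Evenly place N user viewpoints around the horizontal circle of
    the 3-D virtual scene and give each a field of view larger than
    360/N degrees, so that adjacent field-of-view edges overlap."""
    if n_viewpoints < 2:
        raise ValueError("at least two reference fields of view are required")
    step = 360.0 / n_viewpoints
    fov = step + overlap_deg  # satisfies FOV > 360 deg / N
    return [{"yaw_deg": i * step, "fov_deg": fov} for i in range(n_viewpoints)]
```

With `n_viewpoints=3` each field of view is 130°, and the three fields together cover more than 360°, so the panorama is fully covered and every pair of neighbours shares an overlapping strip for stitching.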
Next, after the at least two reference fields of view are obtained, the virtual scene segment corresponding to each of them is determined, and image stitching is performed on these segments; the stitched VR scene screenshot shows the three-dimensional scene that the VR device was presenting when the screenshot instruction was received.

In some embodiments of the present application, at least two reference fields of view are selected in the three-dimensional virtual scene that the VR device can display, with the field-of-view edges of adjacent reference fields of view overlapping; the virtual scene segments corresponding to the reference fields of view are then acquired and stitched together, so that the generated VR scene screenshot matches the three-dimensional virtual scene actually displayed by the VR device, effectively capturing the device's real display content.
FIG. 2a is a schematic flowchart of a VR scene screenshot method provided by other embodiments of the present invention. As shown in FIG. 2a, the method includes:

Step 201: In response to a screenshot instruction, uniformly select at least two different user viewpoints in the three-dimensional virtual scene that the VR device can display.

Step 202: Set the field-of-view angle (FOV) of each of the at least two user viewpoints to obtain at least two reference fields of view, where at least one field-of-view angle satisfies FOV > 360°/N, N being the number of user viewpoints.

Step 203: Perform scene rendering at each of the at least two different user viewpoints in turn, so as to render the virtual scene segments corresponding to the at least two reference fields of view.

Step 204: Perform edge similarity detection on the virtual scene segments corresponding to the at least two reference fields of view.

Step 205: According to the edge similarity detection result, stitch together those of the virtual scene segments whose edge similarity is greater than a set threshold.

Step 206: Store the VR scene screenshot under a specified path on the VR device, and notify the user that the screenshot operation has been completed and/or display the specified path of the VR scene screenshot.
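Steps 204 and 205 above (edge similarity detection, then stitching segments whose similarity exceeds a threshold) can be sketched as follows. The strip representation (lists of grayscale pixel rows) and the greedy chaining strategy are assumptions made for illustration; a real implementation would operate on rendered image buffers:

```python
def edge_similarity(strip_a, strip_b):
    """Similarity in [0, 1] between two edge strips (lists of grayscale
    rows), based on mean absolute pixel difference (255 = max value)."""
    total = count = 0
    for row_a, row_b in zip(strip_a, strip_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return 1.0 - total / (255.0 * count)

def stitch_order(segments, threshold=0.9):
    """Greedily chain segments whose adjacent edge strips are similar
    enough; each segment is {'id', 'left', 'right'} with edge strips."""
    order = [segments[0]]
    remaining = segments[1:]
    while remaining:
        right = order[-1]["right"]
        best = max(remaining, key=lambda s: edge_similarity(right, s["left"]))
        if edge_similarity(right, best["left"]) <= threshold:
            break  # no neighbour passes the similarity threshold
        order.append(best)
        remaining.remove(best)
    return [s["id"] for s in order]
```

The greedy chain stops as soon as no remaining segment's leading edge is similar enough to the current trailing edge, mirroring the "similarity greater than a set threshold" condition in step 205.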
In step 201, a user viewpoint is the visual base point within the three-dimensional virtual scene when the user views it; this base point typically comprises viewing position information and viewing direction information, such as viewpoints A, B, and C shown in FIG. 2b. In the VR field, as the user viewpoint changes, the VR device can display the virtual scene matching that viewpoint. In some exemplary embodiments, the VR scene is developed with tools such as Unity 3D, which can create a three-dimensional virtual space, design a three-dimensional virtual scene within it, and set up a VR virtual camera to simulate the user's eyes; the viewpoint of the VR virtual camera is then the user viewpoint. As shown in FIG. 2b, the user's viewpoints A, B, and C can be simulated by the VR virtual camera's viewpoints A', B', and C', respectively.

In this embodiment, selecting different user viewpoints in the three-dimensional virtual scene simulates the user's different viewing positions and directions, from which the three-dimensional scene being presented by the VR device when the screenshot instruction is received can be obtained. Optionally, selecting the user viewpoints uniformly helps to compute the field-of-view size corresponding to each viewpoint quickly.

In an exemplary implementation, one way of uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene includes:

acquiring the user's current head pose data, which can be obtained from devices installed on the VR device such as an inertial measurement unit, a multi-axis acceleration sensor, or a gyroscope (not detailed here); then, according to the head pose data, determining the user's left-eye and/or right-eye viewpoint in the three-dimensional virtual scene, and determining a base user viewpoint from the left-eye and/or right-eye viewpoint.

Optionally, in this embodiment, the user's left-eye viewpoint may serve as the base user viewpoint, the right-eye viewpoint may serve as the base user viewpoint, or the midpoint between the left-eye and right-eye viewpoints may be chosen; this embodiment places no limit on the choice. Then, at least one viewpoint evenly distributed with the base user viewpoint is selected in the three-dimensional virtual scene as an auxiliary user viewpoint. Optionally, there may be two auxiliary user viewpoints, distributed on either side of the base user viewpoint; one base user viewpoint plus two auxiliary user viewpoints can then provide the highest image rendering and stitching efficiency while ensuring that the subsequently obtained virtual scene segments fully cover the three-dimensional virtual scene.
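A minimal sketch of the viewpoint-selection rule just described — a base viewpoint derived from the head pose (here reduced to the two eye view directions, an assumption) plus auxiliary viewpoints spread evenly around the circle:

```python
def choose_viewpoints(left_eye_yaw_deg, right_eye_yaw_deg, n_auxiliary=2):
    """Base viewpoint = midpoint of the left/right eye view directions;
    auxiliary viewpoints are distributed evenly around the full circle,
    i.e. on either side of the base viewpoint when n_auxiliary == 2."""
    base_yaw = (left_eye_yaw_deg + right_eye_yaw_deg) / 2.0
    step = 360.0 / (1 + n_auxiliary)
    auxiliary = [(base_yaw + (i + 1) * step) % 360.0 for i in range(n_auxiliary)]
    return base_yaw, auxiliary
```

With eye directions a few degrees apart around 0°, the base viewpoint sits between the eyes and the two auxiliary viewpoints land at 120° and 240°, evenly distributed as the text requires.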
一些实施例中,辅助用户视点数量为两个时,至少一个用户视点对应的视场角满足FOV>360°/N,N为用户视点数量;或者每个用户视点对应的视场角均满足FOV>360°/N,N为用户视点数量。In some embodiments, when the number of auxiliary user viewpoints is two, the field of view corresponding to at least one user viewpoint satisfies FOV> 360 ° / N, and N is the number of user viewpoints; or the field of view corresponding to each user viewpoint satisfies FOV > 360 ° / N, where N is the number of user viewpoints.
除此之外,在这种实施方式中,选取得到的至少两个不同用户视点与用户当前的头部位姿数据关联,有利于在后续过程中确定用户当前正在观看的虚拟场景片段,并得到更符合真实观看进度的VR场景截图。In addition, in this embodiment, the selected at least two different user viewpoints are associated with the user's current head and pose data, which is beneficial to determine the virtual scene clips that the user is currently watching in the subsequent process, and obtain Screenshots of VR scenes more in line with real viewing progress.
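The uniform selection of one base viewpoint plus evenly spaced auxiliary viewpoints described above can be sketched as follows. This is a minimal illustrative sketch, assuming viewpoints are represented by their horizontal yaw angles; the function name and parameters are hypothetical, not taken from the patent.

```python
def select_viewpoints(base_yaw_deg, n_auxiliary=2):
    """Uniformly place auxiliary viewpoints around the base user viewpoint.

    With one base viewpoint and n_auxiliary auxiliary viewpoints, the
    N = 1 + n_auxiliary viewpoints are spaced 360/N degrees apart so that
    their fields of view can jointly cover the full panorama.
    """
    n = 1 + n_auxiliary
    step = 360.0 / n
    # Base viewpoint first, then auxiliaries distributed evenly around it.
    return [(base_yaw_deg + i * step) % 360.0 for i in range(n)]

# One base viewpoint at 90° yaw plus two auxiliary viewpoints:
print(select_viewpoints(90.0))  # [90.0, 210.0, 330.0]
```

With the default of two auxiliary viewpoints this reproduces the three-viewpoint arrangement the embodiment describes as most efficient.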
Based on the at least two user viewpoints selected in the above steps, at least two corresponding reference fields of view can be determined. To ensure that the determined reference fields of view cover the panorama of the three-dimensional virtual scene, step 202 may be executed. In step 202, when the at least two user viewpoints comprise N viewpoints, the field of view of each user viewpoint is greater than 360°/N. For example, when N = 3, the field of view of each user viewpoint may be set to 130° or 150°, so that the edges of two adjacent fields of view overlap, which facilitates subsequent image stitching.
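The coverage condition of step 202 can be expressed as a small check. This sketch assumes coverage is evaluated on the horizontal 360° panorama only; the function name is hypothetical.

```python
def check_fov_coverage(fov_deg, n_viewpoints):
    """Verify that N viewpoints with the given FOV cover 360° with overlap.

    Each viewpoint must satisfy FOV > 360/N; the per-seam overlap is
    FOV - 360/N degrees, which gives the stitcher shared edge content.
    """
    minimum = 360.0 / n_viewpoints
    overlap = fov_deg - minimum
    return overlap > 0.0, overlap

# The N = 3, FOV = 130° example from the text:
covered, overlap = check_fov_coverage(130.0, 3)
print(covered, overlap)  # True 10.0
```

At N = 3 and FOV = 130° each seam carries 10° of shared content; at FOV = 150° the overlap grows to 30°, giving the edge-similarity step more material at the cost of more rendered pixels.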
In practice, besides the implementations described in steps 201 and 202, the user viewpoints may also be selected arbitrarily, without regard to uniformity between viewpoints; for example, multiple user viewpoints may be selected at random within the three-dimensional virtual scene the VR device can display. In this approach, the field of view of each user viewpoint can be calculated from its positional relationship to the other user viewpoints, and the sum of the fields of view of the multiple user viewpoints is greater than 360°. For example, among three randomly selected user viewpoints, if the positional deviation between user viewpoint A and user viewpoint B is small while both A and B deviate substantially from user viewpoint C, then viewpoints A and B may be given smaller fields of view and viewpoint C a larger one: for instance, 90° for viewpoint A, 120° for viewpoint B, and 160° for viewpoint C. In step 203, after the fields of view of the at least two different user viewpoints are determined, scene rendering may be performed at each of these viewpoints in turn, to render the virtual scene fragments corresponding to the at least two reference fields of view.
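One possible way to derive per-viewpoint FOVs from the positional relationships between arbitrarily chosen viewpoints, as described above, is to give each viewpoint half of the angular gap to each neighbour plus an overlap margin. This is an assumed interpretation for illustration, not the patent's formula; all names are hypothetical.

```python
def assign_fovs(yaws_deg, margin_deg=10.0):
    """Assign each viewpoint an FOV from its angular gaps to its
    neighbours, so the summed FOVs exceed 360° and adjacent views
    overlap by roughly margin_deg on each side."""
    order = sorted(range(len(yaws_deg)), key=lambda i: yaws_deg[i])
    n = len(order)
    fovs = [0.0] * n
    for k, i in enumerate(order):
        prev_yaw = yaws_deg[order[k - 1]]
        next_yaw = yaws_deg[order[(k + 1) % n]]
        gap_prev = (yaws_deg[i] - prev_yaw) % 360.0
        gap_next = (next_yaw - yaws_deg[i]) % 360.0
        # Cover half of each adjacent gap, plus one margin per side.
        fovs[i] = (gap_prev + gap_next) / 2.0 + 2 * margin_deg
    return fovs

# A and B close together (0° and 40°), C far away (200°):
print(assign_fovs([0.0, 40.0, 200.0]))  # [120.0, 120.0, 180.0]
```

As in the text's example, the two closely spaced viewpoints receive smaller FOVs and the isolated one a larger FOV, and the total always exceeds 360°.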
In some embodiments, taking Unity 3D as an example, when a VR virtual camera is used to simulate the user's eyes, each user viewpoint can serve as the viewpoint of the VR virtual camera: the camera is moved to each different user viewpoint in turn, and the three-dimensional virtual scene rendering routine is run at each viewpoint to render the virtual scene fragment corresponding to that viewpoint's field of view.
In step 204, in some embodiments, after the virtual scene fragments corresponding to the at least two reference fields of view are acquired, edge similarity detection is performed on those fragments.
Optionally, in this step, an image correlation coefficient method may be used to find virtual scene fragments with similar edges. For example, the edges of each virtual scene fragment may be identified separately, the edge correlation coefficient of two adjacent fragments computed from the identification results, and fragments with similar edges determined from the correlation coefficients. Of course, in this embodiment, other image edge similarity algorithms may also be used to identify fragments with similar edges, such as the Euclidean distance method, perceptual hashing, or sliding-window template matching; this embodiment includes but is not limited to these.
In step 205, in some embodiments, among the virtual scene fragments corresponding to the at least two reference fields of view, the fragments whose edge similarity exceeds a set threshold may be stitched together. As shown in FIG. 2c, the virtual scene fragments within the reference fields of view corresponding to user viewpoints A, B, and C can be stitched, where the overlap region between adjacent fragments is their edge similarity region. It should be understood that FIG. 2c illustrates horizontal stitching; in practice, vertical stitching or stitching at other angles may also be used.
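The correlation coefficient approach mentioned above can be sketched as a Pearson correlation between edge strips of two fragments. A minimal sketch assuming grayscale fragments stored as NumPy arrays; the strip width and function name are assumptions for illustration.

```python
import numpy as np

def edge_similarity(left_img, right_img, strip=8):
    """Pearson correlation between the right edge strip of left_img
    and the left edge strip of right_img (2-D grayscale arrays).

    A value near 1.0 suggests the two fragments share overlapping
    content along that seam and are candidates for stitching."""
    a = left_img[:, -strip:].astype(np.float64).ravel()
    b = right_img[:, :strip].astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0  # flat strips carry no edge information
    return float((a * b).sum() / denom)
```

Comparing this score against the set threshold from step 205 decides which pairs of fragments are stitched at which seam.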
In some embodiments, during image stitching, the virtual scene fragment corresponding to the base user viewpoint may serve as the stitching center, so that the resulting VR scene screenshot is centered on the fragment the user is currently viewing and better matches the user's actual viewing experience. Optionally, in this implementation, the virtual scene fragment corresponding to the base user viewpoint is first determined as the stitching center among the fragments corresponding to the at least two reference fields of view; next, the relative position of each auxiliary viewpoint's fragment to the stitching center is determined from the positional relationship between that auxiliary user viewpoint and the base user viewpoint; finally, image stitching is performed on the fragments corresponding to the at least two reference fields of view according to the stitching center and the relative positions, generating the VR scene screenshot. For example, when the base user viewpoint is viewpoint B and the auxiliary user viewpoints are viewpoints A and C on the left and right of viewpoint B, the fragment within viewpoint B's field of view serves as the exact center of the VR screenshot, and the fragments within the fields of view of viewpoints A and C are stitched on either side of it.
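The centred stitching described above can be sketched as follows, under the simplifying assumption that the three fragments have equal height and each seam shares a fixed number of overlapping columns (in step 205 the seam placement would instead come from the edge similarity detection); the function name is hypothetical.

```python
import numpy as np

def stitch_centered(base_frag, left_frag, right_frag, overlap=8):
    """Stitch three fragments with the base-viewpoint fragment in the
    centre, trimming the shared overlap columns once at each seam."""
    parts = [left_frag[:, :-overlap],   # auxiliary viewpoint A
             base_frag,                 # base viewpoint B, the centre
             right_frag[:, overlap:]]   # auxiliary viewpoint C
    return np.hstack(parts)
```

For three fragments of width W the panorama is 3W - 2*overlap columns wide, with the base-viewpoint fragment exactly in the middle, matching the viewpoint-B-centred example above.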
In step 206, after image stitching is completed, the resulting VR scene screenshot may also be stored under a specified path on the VR device, and the user is notified that the screen capture operation has completed and/or the specified path of the VR scene screenshot is displayed. The notification may be delivered by voice or by text; this embodiment imposes no limitation.
In this embodiment, at least two uniformly distributed reference fields of view are selected within the three-dimensional virtual scene the VR device can display, with the edges of adjacent reference fields of view overlapping; then, the virtual scene fragments corresponding to the at least two reference fields of view are acquired, and image stitching is performed on them according to the edge similarity of the fragments. The generated VR scene screenshot thus matches the three-dimensional virtual scene actually displayed by the VR device and effectively captures the device's real display content.
It should be noted that each step of the method provided in the above embodiments may be executed by the same device, or the method may be executed by different devices. For example, steps 201 to 204 may be executed by device A; alternatively, steps 201 and 202 may be executed by device A and step 203 by device B; and so on.
In addition, some flows described in the above embodiments and drawings include multiple operations appearing in a specific order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Sequence numbers such as 201 and 202 merely distinguish the different operations; the numbers themselves do not denote any execution order. Moreover, these flows may include more or fewer operations, and those operations may be executed sequentially or in parallel.
The above embodiments describe some implementations of the VR scene screenshot method provided by some embodiments of the present invention. The method may be implemented by the VR device shown in FIG. 3. Optionally, the VR device includes: a memory 301, a processor 302, an input apparatus 303, and an output apparatus 304.
The memory 301, processor 302, input apparatus 303, and output apparatus 304 may be connected via a bus or in other ways; the figure takes a bus connection as an example. In other connection modes not illustrated, the memory 301 may be directly coupled to the processor 302, and the input apparatus 303 and output apparatus 304 may be connected to the processor 302 directly or indirectly through data lines and data interfaces. Of course, these connection modes are merely illustrative and do not limit the protection scope of the embodiments of the present invention in any way.
The memory 301 stores one or more computer instructions and may be configured to store various other data to support operation on the VR device. Examples of such data include instructions of any application or method operated on the VR device.
The memory 301 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
In some embodiments, the memory 301 may optionally include memory remotely located relative to the processor 302; such remote memory may be connected to the VR display device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 302, coupled to the memory 301, executes the one or more computer instructions to: in response to a screenshot instruction, select at least two reference fields of view within the three-dimensional virtual scene the VR device can display, with the field-of-view edges of two adjacent reference fields of view overlapping; acquire the virtual scene fragments corresponding to each of the at least two reference fields of view; and perform image stitching on those virtual scene fragments to generate a VR scene screenshot.
Further optionally, when selecting at least two reference fields of view within the three-dimensional virtual scene displayed by the VR device, the processor 302 is specifically configured to: uniformly select at least two different user viewpoints within the three-dimensional virtual scene; and set the field-of-view angles (FOV) of the at least two different user viewpoints to obtain at least two reference fields of view, where at least one field-of-view angle satisfies FOV > 360°/N, N being the number of user viewpoints comprised in the at least two user viewpoints.
Further optionally, when uniformly selecting at least two different user viewpoints within the three-dimensional virtual scene, the processor 302 is specifically configured to: acquire the user's current head pose data; determine, according to the head pose data, the user's left-eye and/or right-eye viewpoint within the three-dimensional virtual scene; determine a base user viewpoint according to the user's left-eye and/or right-eye viewpoint; and select, within the three-dimensional virtual scene, at least one viewpoint uniformly distributed with the base user viewpoint as an auxiliary user viewpoint.
Further optionally, the number of auxiliary user viewpoints is two.
Further optionally, when acquiring the virtual scene fragments corresponding to the at least two reference fields of view, the processor 302 is specifically configured to: perform scene rendering at each of the at least two different user viewpoints in turn, to render the virtual scene fragments corresponding to the at least two reference fields of view.
Further optionally, when performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot, the processor 302 is specifically configured to: perform edge similarity detection on the virtual scene fragments corresponding to the at least two reference fields of view; and, according to the detection result of the edge similarity detection, stitch together those fragments, among the virtual scene fragments corresponding to the at least two reference fields of view, whose edge similarity exceeds a set threshold.
Further optionally, when performing image stitching on the virtual scene fragments corresponding to the at least two reference fields of view to generate a VR scene screenshot, the processor 302 is specifically configured to: determine, among the virtual scene fragments corresponding to the at least two reference fields of view, the fragment corresponding to the base user viewpoint as the stitching center; determine the relative position of each auxiliary viewpoint's fragment to the stitching center according to the positional relationship between the auxiliary user viewpoint and the base user viewpoint; and perform image stitching on the fragments according to the stitching center and the relative positions to generate the VR scene screenshot.
Further optionally, the processor 302 is further configured to: store the VR scene screenshot under a specified path on the VR device, and notify the user that the screen capture operation has completed and/or display the specified path of the VR scene screenshot.
The input apparatus 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the VR device. The output apparatus 304 may include a display device such as a display screen.
Further, as shown in FIG. 3, the VR device also includes a power supply component 305, which provides power to the various components of the device in which it resides. A power supply component may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device in which it resides.
The above VR device can execute the VR scene screenshot method provided by the embodiments of the present application and possesses the functional modules and beneficial effects corresponding to executing the method. For technical details not described exhaustively in this embodiment, reference may be made to the method provided in the embodiments of the present application; they are not repeated here.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed, can implement the steps of the method executable by the above VR device.
The VR device provided by some embodiments of the present invention may be an external head-mounted display device or an integrated head-mounted display device, where the external head-mounted display device may be used in conjunction with an external processing system (for example, a computer processing system).
FIG. 4 shows a schematic diagram of the internal configuration of a VR device 400 in some embodiments.
The display unit 401 may include a display panel disposed on the side surface of the VR device 400 facing the user's face; it may be a single panel, or separate left and right panels corresponding to the user's left and right eyes respectively. The display panel may be an electroluminescent (EL) element, a liquid crystal display or a micro-display of similar structure, or a laser scanning display such as a direct retinal display.
The virtual image optical unit 402 presents the image displayed by the display unit 401 to the user in a magnified manner, allowing the user to observe the displayed image as a magnified virtual image. The display image output to the display unit 401 may be an image of a virtual scene provided by a content playback device (a Blu-ray disc or DVD player) or a streaming media server, or an image of a real scene captured by the external camera 410. In some embodiments, the virtual image optical unit 402 may include a lens unit, such as a spherical lens, an aspherical lens, or a Fresnel lens.
The input operation unit 403 includes at least one operating component for performing input operations, such as a key, button, switch, or other component with a similar function; it receives user instructions through the operating component and outputs the instructions to the control unit 407.
The state information acquisition unit 404 acquires state information of the user wearing the VR device 400. It may include various types of sensors for detecting state information itself, and may also acquire state information from external devices (for example, a smartphone, a wristwatch, or another multi-function terminal worn by the user) through the communication unit 405. The state information acquisition unit 404 may acquire position information and/or posture information of the user's head, and may include one or more of a gyroscope sensor, an acceleration sensor, a global positioning system (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio-frequency field strength sensor. In addition, the state information acquisition unit 404 acquires state information of the user wearing the VR device 400, for example the user's operating state (whether the user is wearing the VR device 400), the user's motion state (such as stillness, walking, running, and similar movement states; the posture of a hand or fingertip; the open or closed state of the eyes; the gaze direction; the pupil size), mental state (whether the user is immersed in observing the displayed image, and the like), and even physiological state.
The communication unit 405 performs communication processing with external apparatuses, modulation and demodulation processing, and encoding and decoding of communication signals. In addition, the control unit 407 may send transmission data to external apparatuses through the communication unit 405. The communication mode may be wired or wireless, for example Mobile High-Definition Link (MHL) or Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (Wi-Fi), Bluetooth or Bluetooth Low Energy communication, or an IEEE 802.11s-standard mesh network. In addition, the communication unit 405 may be a cellular radio transceiver operating according to Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), or similar standards.
In some embodiments, the VR device 400 may further include a storage unit 406, a mass storage device configured with a solid-state drive (SSD) or the like. In some embodiments, the storage unit 406 may store applications or various types of data; for example, content viewed by the user on the VR device 400 may be stored in the storage unit 406.
In some embodiments, the VR device 400 may further include a control unit 407 and memory (for example, the illustrated ROM 407A and RAM 407B). The control unit 407 may include a central processing unit (CPU) or another device with similar functions. In some embodiments, the control unit 407 may execute applications stored in the storage unit 406, or may serve as the circuitry that executes the methods, functions, and operations disclosed in some embodiments of the present invention.
The image processing unit 408 performs signal processing, such as image quality correction of the image signal output from the control unit 407, and converts its resolution to match the resolution of the screen of the display unit 401. The display drive unit 409 then selects each row of pixels of the display unit 401 in turn and scans them row by row, thereby providing pixel signals based on the signal-processed image signal.
In some embodiments, the VR device 400 may further include an external camera. One or more external cameras 410 may be disposed on the front surface of the body of the VR device 400. The external camera 410 can acquire three-dimensional information and may also be used as a distance sensor. In addition, a position-sensitive detector (PSD) or another type of distance sensor that detects signals reflected from objects may be used together with the external camera 410. The external camera 410 and the distance sensor may be used to detect the body position, posture, and shape of the user wearing the VR device 400. Moreover, under certain conditions the user can directly view or preview the real scene through the external camera 410.
In some embodiments, the VR device 400 may further include a sound processing unit 411, which can perform sound quality correction or amplification of the sound signal output from the control unit 407, signal processing of input sound signals, and the like. The sound input/output unit 412 then outputs sound externally after sound processing and receives sound input from a microphone.
It should be noted that the structures or components shown in dashed boxes in FIG. 4 may be independent of the VR device 400, for example disposed in an external processing system (such as a computer system) used in conjunction with the VR device 400; alternatively, the structures or components shown in dashed boxes may be disposed inside or on the surface of the VR device 400.
Those of ordinary skill in the art will also appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random-access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
It should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

Claims (10)

  1. A VR scene screenshot method, comprising:
    in response to a screenshot instruction, selecting at least two reference fields of view within a three-dimensional virtual scene displayable by a VR device, wherein the field-of-view edges of adjacent reference fields of view overlap;
    acquiring the virtual scene segments respectively corresponding to the at least two reference fields of view; and
    performing image stitching on the virtual scene segments respectively corresponding to the at least two reference fields of view to generate a VR scene screenshot.
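The flow of claim 1 can be illustrated with a toy one-dimensional model: the 360° scene is a list of samples, each reference field of view is a slice widened by an overlap margin, and stitching trims the duplicated margin back out. All names here (`take_vr_screenshot`, the list-based "scene") are illustrative stand-ins, not part of the patent.

```python
def take_vr_screenshot(scene, n_views, overlap):
    """Toy 1-D sketch of claim 1: capture n_views overlapping segments
    of a wrap-around 'scene', then stitch them back into one panorama."""
    span = len(scene) // n_views
    segments = []
    for i in range(n_views):
        start = i * span
        # each segment extends 'overlap' samples past its own share, so
        # the edges of adjacent reference fields of view overlap
        segments.append([scene[j % len(scene)]
                         for j in range(start, start + span + overlap)])
    # stitching: keep only each segment's own span, dropping the
    # duplicated overlap region it shares with its neighbour
    stitched = []
    for seg in segments:
        stitched.extend(seg[:span])
    return segments, stitched
```

With three views and a 15-sample overlap, the trailing edge of each captured segment matches the leading edge of the next, and the stitched result reproduces the full scene.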
  2. The method according to claim 1, wherein selecting at least two reference fields of view within the three-dimensional virtual scene displayed by the VR device comprises:
    uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene; and
    setting the field of view (FOV) of each of the at least two different user viewpoints to obtain the at least two reference fields of view, wherein at least one field of view satisfies FOV > 360°/N, N being the number of user viewpoints among the at least two user viewpoints.
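The constraint FOV > 360°/N in claim 2 simply guarantees that N evenly spaced views cover the full circle with margin left over for overlap. A minimal sketch, where `overlap_deg` is an assumed tuning parameter rather than anything the claim specifies:

```python
def reference_fovs(n_viewpoints, overlap_deg=10.0):
    """Assign each of N evenly spaced viewpoints a field of view wider
    than 360/N degrees, so adjacent fields of view overlap (claim 2)."""
    base = 360.0 / n_viewpoints
    fov = base + overlap_deg            # satisfies FOV > 360°/N
    centers = [i * base for i in range(n_viewpoints)]
    return [(center, fov) for center in centers]
```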
  3. The method according to claim 2, wherein uniformly selecting at least two different user viewpoints in the three-dimensional virtual scene comprises:
    acquiring the user's current head pose data;
    determining the user's left-eye and/or right-eye viewpoint in the three-dimensional virtual scene according to the head pose data;
    determining a base user viewpoint according to the user's left-eye and/or right-eye viewpoint; and
    selecting, within the three-dimensional virtual scene, at least one viewpoint evenly distributed with respect to the base user viewpoint as an auxiliary user viewpoint.
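One plausible reading of claims 3-4: derive a base viewpoint from the user's current head orientation, then place the auxiliary viewpoints at equal angular steps around the rest of the circle. Real head pose data has six degrees of freedom; this sketch reduces it to a single yaw angle purely for illustration.

```python
def select_viewpoints(head_yaw_deg, n_aux=2):
    """Base viewpoint at the user's current head yaw, plus n_aux
    auxiliary viewpoints evenly distributed around the circle
    (hypothetical simplification of claims 3-4)."""
    n = n_aux + 1                 # base viewpoint + auxiliaries
    step = 360.0 / n
    return [(head_yaw_deg + i * step) % 360.0 for i in range(n)]
```

With the two auxiliary viewpoints of claim 4, a user looking at 30° yields viewpoints at 30°, 150°, and 270°, evenly spanning the scene.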
  4. The method according to claim 3, wherein the number of auxiliary user viewpoints is two.
  5. The method according to any one of claims 2-4, wherein acquiring the virtual scene segments respectively corresponding to the at least two reference fields of view comprises:
    performing scene rendering at each of the at least two different user viewpoints in turn, so as to render the virtual scene segments corresponding to the at least two reference fields of view.
  6. The method according to any one of claims 1-4, wherein performing image stitching on the virtual scene segments respectively corresponding to the at least two reference fields of view to generate a VR scene screenshot comprises:
    performing edge similarity detection on the virtual scene segments respectively corresponding to the at least two reference fields of view; and
    according to the result of the edge similarity detection, stitching those virtual scene segments, among the virtual scene segments corresponding to the at least two reference fields of view, whose edge similarity is greater than a set threshold.
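The edge check of claim 6 can be sketched as comparing the trailing edge of one segment with the leading edge of the next, stitching only when they agree beyond a threshold. The claim does not specify the similarity metric; simple sample-equality counting stands in for it here, and both function names are illustrative.

```python
def edge_similarity(edge_a, edge_b):
    """Fraction of matching samples along two segment edges (a stand-in
    for whatever pixel-level metric an implementation would use)."""
    matches = sum(1 for x, y in zip(edge_a, edge_b) if x == y)
    return matches / len(edge_a)

def stitch_if_similar(seg_a, seg_b, overlap, threshold=0.9):
    """Stitch seg_b onto seg_a only if their overlapping edges are
    similar enough (claim 6); otherwise signal failure with None."""
    if edge_similarity(seg_a[-overlap:], seg_b[:overlap]) > threshold:
        return seg_a + seg_b[overlap:]   # drop the duplicated edge
    return None
```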
  7. The method according to claim 3 or 4, wherein performing image stitching on the virtual scene segments respectively corresponding to the at least two reference fields of view to generate a VR scene screenshot comprises:
    determining, from the virtual scene segments corresponding to the at least two reference fields of view, the virtual scene segment corresponding to the base user viewpoint as the stitching center;
    determining the position of the virtual scene segment corresponding to each auxiliary user viewpoint relative to the stitching center according to the positional relationship between that auxiliary user viewpoint and the base user viewpoint; and
    performing image stitching on the virtual scene segments respectively corresponding to the at least two reference fields of view according to the stitching center and the relative positions, so as to generate the VR scene screenshot.
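The ordering logic of claim 7 can be sketched by anchoring the base viewpoint's segment in the middle and placing each auxiliary segment to its left or right by the sign of its angular offset from the base viewpoint. The signed-offset convention is an assumption; the claim only requires that relative positions follow the viewpoint geometry.

```python
def arrange_around_center(base_segment, aux_segments, aux_offsets_deg):
    """Place the base segment as the stitching center and order the
    auxiliary segments around it by their signed angular offsets from
    the base user viewpoint (illustrative reading of claim 7)."""
    paired = sorted(zip(aux_offsets_deg, aux_segments), key=lambda p: p[0])
    left = [seg for off, seg in paired if off < 0]
    right = [seg for off, seg in paired if off >= 0]
    ordered = left + [base_segment] + right
    # flatten the ordered segments into one stitched strip
    return [sample for seg in ordered for sample in seg]
```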
  8. The method according to any one of claims 1-4, further comprising:
    storing the VR scene screenshot under a designated path of the VR device, and notifying the user that the screenshot operation has been completed and/or displaying the designated path of the VR scene screenshot.
  9. A VR device, comprising a memory and a processor;
    wherein the memory is configured to store one or more computer instructions; and
    the processor, coupled to the memory, is configured to execute the one or more computer instructions so as to perform the VR scene screenshot method according to any one of claims 1-8.
  10. A computer-readable storage medium storing a computer program, wherein the steps of the method according to any one of claims 1-8 can be implemented when the computer program is executed.
PCT/CN2018/123764 2018-08-31 2018-12-26 Method for screenshot of vr scene, device and storage medium WO2020042494A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811013485.2 2018-08-31
CN201811013485.2A CN109002248B (en) 2018-08-31 2018-08-31 VR scene screenshot method, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2020042494A1

Family

ID=64591425

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123764 WO2020042494A1 (en) 2018-08-31 2018-12-26 Method for screenshot of vr scene, device and storage medium

Country Status (2)

Country Link
CN (1) CN109002248B (en)
WO (1) WO2020042494A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002248B (en) * 2018-08-31 2021-07-20 歌尔光学科技有限公司 VR scene screenshot method, equipment and storage medium
CN114697302A (en) * 2020-12-31 2022-07-01 伊姆西Ip控股有限责任公司 Method for distributing virtual visual content
CN114286142B (en) * 2021-01-18 2023-03-28 海信视像科技股份有限公司 Virtual reality equipment and VR scene screen capturing method
CN112732088B (en) * 2021-01-18 2023-01-20 海信视像科技股份有限公司 Virtual reality equipment and monocular screen capturing method
CN113126942B (en) * 2021-03-19 2024-04-30 北京城市网邻信息技术有限公司 Method and device for displaying cover picture, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104835118A (en) * 2015-06-04 2015-08-12 浙江得图网络有限公司 Method for acquiring panorama image by using two fish-eye camera lenses
CN105959666A (en) * 2016-06-30 2016-09-21 乐视控股(北京)有限公司 Method and device for sharing 3d image in virtual reality system
US20160353090A1 (en) * 2015-05-27 2016-12-01 Google Inc. Omnistereo capture and render of panoramic virtual reality content
CN109002248A (en) * 2018-08-31 2018-12-14 歌尔科技有限公司 VR scene screenshot method, equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN105138245B (en) * 2015-09-30 2018-06-29 北京奇虎科技有限公司 A kind of duplicate removal treatment method and device of intelligent terminal screenshot picture
CN105847672A (en) * 2016-03-07 2016-08-10 乐视致新电子科技(天津)有限公司 Virtual reality helmet snapshotting method and system
US20180068574A1 (en) * 2016-09-08 2018-03-08 Jim LaCrosse Method of and system for facilitating structured block play in a virtual reality environment


Also Published As

Publication number Publication date
CN109002248A (en) 2018-12-14
CN109002248B (en) 2021-07-20

Similar Documents

Publication Publication Date Title
WO2020042494A1 (en) Method for screenshot of vr scene, device and storage medium
US10832448B2 (en) Display control device, display control method, and program
TW202013149A (en) Augmented reality image display method, device and equipment
US20130215149A1 (en) Information presentation device, digital camera, head mount display, projector, information presentation method and non-transitory computer readable medium
JP6720341B2 (en) Virtual reality device and method for adjusting its contents
JP6126271B1 (en) Method, program, and recording medium for providing virtual space
WO2018000619A1 (en) Data display method, device, electronic device and virtual reality device
JP2016126042A (en) Image display system, image display device, image display method and program
WO2019015249A1 (en) Virtual-reality-based image display method and apparatus, and virtual reality helmet device
EP3062506B1 (en) Image switching method and apparatus
CN113412479A (en) Mixed reality display device and mixed reality display method
JP6126272B1 (en) Method, program, and recording medium for providing virtual space
US20210383097A1 (en) Object scanning for subsequent object detection
US11113379B2 (en) Unlocking method and virtual reality device
CN112702533B (en) Sight line correction method and sight line correction device
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
JPWO2019026919A1 (en) Image processing system, image processing method, and program
JP2017208808A (en) Method of providing virtual space, program, and recording medium
EP3805899A1 (en) Head mounted display system and scene scanning method thereof
US20200342833A1 (en) Head mounted display system and scene scanning method thereof
US10937174B2 (en) Image processing device, image processing program, and recording medium
CN114339029B (en) Shooting method and device and electronic equipment
CN111736692B (en) Display method, display device, storage medium and head-mounted device
JP6031016B2 (en) Video display device and video display program
CN108965859B (en) Projection mode identification method, video playing method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18931380; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18931380; Country of ref document: EP; Kind code of ref document: A1)