WO2024060959A1 - Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device - Google Patents

Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device Download PDF

Info

Publication number
WO2024060959A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
virtual environment
virtual
viewing area
drag
Prior art date
Application number
PCT/CN2023/116228
Other languages
French (fr)
Chinese (zh)
Inventor
Pang Na (庞娜)
Yang Yiping (杨毅平)
Fang Chi (方迟)
Original Assignee
Beijing Zitiao Network Technology Co., Ltd. (北京字跳网络技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co., Ltd.
Publication of WO2024060959A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Definitions

  • Embodiments of the present disclosure relate to, and provide, a method, apparatus, storage medium, and device for adjusting the viewing picture in a virtual environment.
  • a drag method in three-dimensional space is designed for 2D video, VR180 video, and VR360 video, allowing users to experience the charm of the VR space and different viewing perspectives in the video field, enhancing the immersive experience of watching movies in the virtual reality space.
  • embodiments of the present disclosure provide a method for adjusting a viewing image in a virtual environment.
  • the method includes:
  • the video picture displayed in the video viewing area is determined based on the current drag position of the video viewing area in the virtual environment.
  • dragging the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first included angle includes:
  • the initial position of the ray cursor is taken as the origin of a spherical space, the first included angle is fixed, and the video viewing area is dragged along the spherical surface of the spherical space.
  • the viewing mode of the virtual environment is a two-dimensional video viewing mode
  • the video viewing area is a virtual screen
  • the virtual screen is in a display state before responding to the drag control information.
  • the dragging of the video viewing area along the spherical surface of the spherical space includes:
  • the method further includes:
  • the virtual screen is controlled to flip 180 degrees around the center of the virtual screen.
  • the method further includes:
  • if the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and clipping (model intersection) occurs between the virtual screen and the virtual ground, the virtual ground is hidden.
  • the method further includes:
  • the frame of the virtual screen is controlled to return to normal display.
  • determining the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment includes:
  • the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein the virtual screen is displayed at the current drag position.
  • the video picture displayed on the virtual screen at the current drag position and the video picture displayed at the drag starting position are both two-dimensional videos played in full screen.
  • the video viewing area is a viewfinder frame of a preset ratio.
  • before responding to the drag control information, the viewfinder frame is in a hidden state;
  • in response to the drag control information, the viewfinder frame is controlled to be displayed in the virtual environment.
  • the method further includes:
  • the method further includes:
  • before responding to the reset-field-of-view control instruction, the method further includes:
  • reset-field-of-view prompt information is displayed in the viewfinder frame, and the prompt information is used to prompt the user to input the reset-field-of-view control instruction.
  • determining the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment includes:
  • the video picture displayed in the viewfinder frame is the video map corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed at its drag starting position.
  • the method further includes: during the dragging process, hiding the ray cursor and displaying the cursor focus of the ray cursor located on the video viewing area.
  • the method further includes:
  • if the drag control information is generated based on keys of the interactive device manipulated by the user, then in response to the drag control information, vibration prompt information is sent to the interactive device; the vibration prompt information is used to make the interactive device vibrate to indicate that the drag operation is triggered.
  • the method further includes:
  • if the drag control information is generated based on the user's bare-hand gesture, then in response to the drag control information, voice prompt information is issued; the voice prompt information is used to prompt that the drag operation is triggered.
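The two feedback rules above (vibration for key-driven drags, voice for bare-hand drags) can be sketched as a small dispatch function; this is an illustrative sketch only, and the `input_source` labels are hypothetical names, not identifiers from the disclosure:

```python
def drag_feedback(input_source):
    # Key-generated drag control information -> vibration prompt on the
    # interactive device; bare-hand gesture (no device to vibrate) -> voice
    # prompt. Both signal that the drag operation has been triggered.
    if input_source == "device_keys":
        return "vibration"
    if input_source == "bare_hand_gesture":
        return "voice"
    return None
```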
  • Embodiments of the present disclosure provide an apparatus for adjusting the viewing picture in a virtual environment.
  • the device includes:
  • a display unit is used to display a virtual environment.
  • a ray cursor and a video viewing area are presented in the virtual environment.
  • the ray cursor points in the direction of the video viewing area and forms a first included angle with the video viewing area.
  • a control unit, configured to drag the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first included angle;
  • a determining unit configured to determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
  • embodiments of the present disclosure provide a computer-readable storage medium that stores a computer program, and the computer program is suitable for being loaded by a processor to execute the steps of the method described in any of the above embodiments.
  • Embodiments of the present disclosure provide a virtual reality device. The virtual reality device includes a processor and a memory; a computer program is stored in the memory, and the processor invokes the computer program stored in the memory to execute the method described in any of the above embodiments.
  • embodiments of the present disclosure provide a computer program product, including a computer program.
  • when the computer program is executed by a processor, the method for adjusting the viewing picture in a virtual environment as described in any of the above embodiments is implemented.
  • FIG. 1 is a schematic flowchart of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of a first application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of a second application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of a third application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of a fourth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of a fifth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of a sixth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of a seventh application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 9 is a schematic diagram of an eighth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 10 is a schematic diagram of a ninth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 11 is a schematic diagram of a tenth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 12 is a schematic diagram of an eleventh application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 13 is a schematic diagram of a twelfth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 14 is a schematic structural diagram of the apparatus for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
  • FIG. 15 is a first schematic structural diagram of a virtual reality device provided by an embodiment of the present disclosure;
  • FIG. 16 is a second schematic structural diagram of a virtual reality device provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a method, device, computer-readable storage medium, virtual reality device, server, and computer program product for adjusting a viewing image in a virtual environment.
  • the viewing picture adjustment method in the virtual environment of the embodiment of the present disclosure can be executed by a virtual reality device or a server.
  • the disclosed embodiments can be applied to various application scenarios such as Extended Reality (XR), Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
  • Extended Reality (XR) is a concept that includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR); it refers to an environment in which the virtual world is connected to the real world, and to technology that enables users to interact with that environment in real time.
  • Virtual Reality (VR) is a technology for creating and experiencing virtual worlds. It computes and generates a virtual environment, which is a kind of multi-source information simulation (the virtual reality mentioned in this document at least includes visual perception, and can also include auditory perception, tactile perception, motion perception, and even taste perception, smell perception, etc.), realizing the integration of the virtual environment with interactive three-dimensional dynamic vision and simulation of entity behavior, and immersing users in the simulated virtual reality environment. It can be applied in various virtual environments such as maps, games, videos, education, medical care, simulation, collaborative training, sales, assisted manufacturing, maintenance, and repair.
  • Augmented Reality (AR) is a technology in which the camera's pose parameters in the real world (or three-dimensional world, real world) are calculated in real time during camera image capture, and virtual elements are superimposed on the captured images based on those parameters.
  • Virtual elements include, but are not limited to: images, videos, and three-dimensional models.
  • the goal of AR technology is to connect the virtual world to the real world on the screen for interaction.
  • Mixed Reality (MR): computer-created sensory input can be adapted to sensory input from the physical setting, including changes in that sensory input.
  • some electronic systems used to render MR scenery may monitor orientation and/or position relative to the physical scenery to enable virtual objects to interact with real objects (i.e., physical elements from the physical scenery or representations thereof). For example, the system may monitor motion so that a virtual plant appears stationary relative to a physical building.
  • Augmented Virtuality (AV) refers to a computer-created or virtual scenery that incorporates at least one sensory input from the physical scenery.
  • the one or more sensory inputs from the physical setting may be a representation of at least one feature of the physical setting.
  • a virtual object may take on the color of a physical element captured by one or more imaging sensors.
  • virtual objects may exhibit characteristics consistent with actual weather conditions in the physical scene, as identified via weather-related imaging sensors and/or online weather data.
  • an augmented virtuality forest can have virtual trees and structures, but the animals may have features accurately recreated from images taken of physical animals.
  • Virtual field of view: the area in the virtual environment that the user can perceive through the lenses of the virtual reality device, represented using the field of view (Field Of View, FOV) of the virtual field of view.
  • Virtual reality device: a terminal that realizes virtual reality effects. It can usually be provided in the form of glasses, a head-mounted display (HMD), or contact lenses to achieve visual perception and other forms of perception. The forms in which a virtual reality device is realized are not limited to these, and the device can be further miniaturized or enlarged as needed.
  • the virtual reality devices described in the embodiments of the present disclosure may include, but are not limited to, the following types:
  • Computer-based virtual reality (PCVR) device: an externally connected device that uses the data output by a PC to achieve virtual reality effects, with the PC performing the calculations related to virtual reality functions.
  • Mobile virtual reality device: supports setting up a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display with a special card slot). Through a wired or wireless connection with the mobile terminal, the mobile terminal performs the calculations related to virtual reality functions and outputs data to the mobile virtual reality device, for example, viewing virtual reality videos through a mobile terminal APP.
  • All-in-one virtual reality device: has a processor for performing the calculations related to virtual reality functions, so it has independent virtual reality input and output functions; it does not need to be connected to a PC or mobile terminal, and offers a high degree of freedom in use.
  • Each embodiment of the present disclosure provides a method for adjusting the viewing image in a virtual environment.
  • the method can be executed by a terminal or a server, or can be executed jointly by a terminal and a server.
  • the embodiments of the present disclosure explain the method for adjusting the viewing picture in a virtual environment by taking execution by the terminal (virtual reality device) as an example.
  • Figure 1 is a schematic flowchart of a viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure.
  • Figures 2 to 13 are schematic diagrams of relevant application scenarios provided by embodiments of the present disclosure, wherein the blank background in Figure 2 can be a three-dimensional virtual space layer.
  • the method includes:
  • Step 110: Display a virtual environment, with a ray cursor and a video viewing area presented in the virtual environment.
  • the ray cursor points in the direction of the video viewing area and forms a first included angle with the video viewing area.
  • a virtual environment 10 is displayed.
  • a ray cursor 11 and a video viewing area 12 are presented in the virtual environment 10 .
  • the ray cursor 11 points in the direction of the video viewing area 12 and forms a first included angle θ with the video viewing area 12.
  • for example, in order to enrich the presentation of the virtual environment 10, a virtual handle 13 can be presented at the starting position 111 of the ray cursor 11; a ray is emitted directly in front of the virtual handle 13 in the direction of the video viewing area 12, and a first included angle θ is formed between the ray cursor 11 and the video viewing area 12.
  • the first included angle θ is the included angle formed by the ray cursor 11 and the plane where the video viewing area 12 is located.
  • in the 2D video viewing mode, the viewing area is the video panel (hereinafter referred to as the virtual screen) in the background; in VR180 and VR360 modes, the viewing area is the visual focus area, which corresponds to the viewfinder frame when dragging subsequently.
  • the ray cursor 11 and the video viewing area 12 presented in the virtual environment 10 can be displayed normally or hidden.
  • the viewing mode of the virtual environment 10 is a two-dimensional video viewing mode
  • the video viewing area 12 can be a virtual screen.
  • before the virtual screen is dragged and during the dragging process, the virtual screen can be in the display state, to simulate that the user is in a movie-viewing scene such as a cinema; the ray cursor 11 can be in the display state before the virtual screen is dragged, and can be in a hidden state while the virtual screen is being dragged.
  • for example, if a virtual handle 13 is also presented at the starting position 111 of the ray cursor 11, the virtual handle 13 can be in the display state before the virtual screen is dragged, and can be hidden while the virtual screen is being dragged.
  • the video viewing area 12 can be a viewfinder with a preset ratio (equivalent to the user's visual focus area in the virtual environment).
  • before the viewfinder frame is dragged, the viewfinder frame can be hidden, to simulate that the user is in a panoramic video scene in which the entire virtual environment displays the panoramic video; while the viewfinder frame is being dragged, it can be in the display state. The ray cursor 11 can be in the display state before the viewfinder frame is dragged, and in a hidden state while the viewfinder frame is being dragged; for example, if the virtual handle 13 is also presented at the starting position 111 of the ray cursor 11, the virtual handle 13 can be in the display state before the viewfinder frame is dragged, and in a hidden state while the viewfinder frame is being dragged.
  • Step 120: Drag the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first included angle.
  • the drag control information can be generated based on the user manipulating keys of the interactive device.
  • for example, a virtual environment can be displayed on the display of a virtual reality device; the virtual reality device can be connected to an external interactive device, and by operating the buttons of the interactive device, the ray cursor in the virtual environment can be driven, so that in response to the drag control information, the drag operation that the ray cursor performs on the video viewing area is controlled.
  • the interactive device may include a handle, a digital glove, a specific interactive device, etc.
  • alternatively, the drag control information can be generated based on the user's bare-hand gesture, and the drag operation that the ray cursor performs on the video viewing area can be controlled in response to the drag control information.
  • dragging the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first included angle includes:
  • the initial position of the ray cursor is taken as the origin of a spherical space, the first included angle is fixed, and the video viewing area is dragged along the spherical surface of the spherical space.
  • a spherical space 20 can be provided, and the spherical space 20 serves as a viewing container.
  • the movie can be viewed in the viewing container of the spherical space 20 .
  • in response to the drag control information, the starting position 111 of the ray cursor 11 is taken as the origin A of the spherical space 20, the first included angle θ between the ray cursor 11 and the plane where the video viewing area 12 is located is fixed, and the video viewing area 12 is dragged along the spherical surface of the spherical space 20.
  • a cubic space can also be set up as a viewing container.
  • the viewing can be performed in the viewing container of the cubic space.
  • the initial position of the ray cursor is taken as the center point of the cubic space, the first included angle between the ray cursor and the plane where the video viewing area is located is fixed, and the video viewing area is dragged along the surface of the cubic space.
  • by controlling the cursor focus of the ray cursor to stay in the draggable video viewing area and moving it along the x-axis and/or y-axis direction, human-computer interaction can be performed in the cubic space to realize dragging of the viewing picture.
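The spherical drag described above can be sketched numerically: keeping the ray's starting point (origin A) and its length (the sphere radius) fixed means the landing point B always stays on the sphere, so the first included angle between the ray and the tangent viewing-area plane is preserved. This is a minimal illustrative sketch, not the disclosure's implementation; the yaw/pitch parameterization is an assumption:

```python
import math

def drag_on_sphere(origin, radius, yaw, pitch):
    # Landing point B on the spherical surface for the given yaw/pitch
    # (radians). Origin A is the ray cursor's starting position; the ray
    # length (radius) never changes, so the included angle between the ray
    # and the tangent viewing-area plane stays fixed during the drag.
    ox, oy, oz = origin
    bx = ox + radius * math.cos(pitch) * math.sin(yaw)
    by = oy + radius * math.sin(pitch)
    bz = oz + radius * math.cos(pitch) * math.cos(yaw)
    return (bx, by, bz)
```

For any yaw/pitch the distance from A to B equals the radius, which is exactly the invariant the fixed first included angle relies on.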
  • the viewing mode of the virtual environment is a two-dimensional video viewing mode
  • the video viewing area is a virtual screen
  • the virtual screen is in a display state before responding to the drag control information
  • dragging the video viewing area along the spherical surface of the spherical space includes: dragging the virtual screen along the spherical surface of the spherical space in at least one direction of the x-axis direction and the y-axis direction.
  • the viewing mode of the virtual environment is a two-dimensional (2D) video viewing mode
  • the video viewing area is a virtual screen
  • before responding to the drag control information, the virtual screen is in the display state.
  • the 2D video viewing mode can refer to the virtual environment 10 shown in Figure 2 or Figure 4.
  • the virtual environment 10 presents a ray cursor 11 and a video viewing area 12 (i.e., a virtual screen 121), where the ray cursor 11 points in the direction of the video viewing area 12 and forms a first included angle θ with the video viewing area 12.
  • a virtual handle 13 can be presented at the starting position 111 of the ray cursor 11, and directly in front of the virtual handle 13 there is a ray emitted in the direction of the video viewing area 12 (i.e., the virtual screen 121); a first included angle θ is formed between the ray cursor 11 and the video viewing area 12 (i.e., the virtual screen 121).
  • the drag trigger condition includes: the ray cursor 11 points to the video viewing area 12 (i.e., the virtual screen 121), and the cursor focus 112 of the ray cursor 11 is in the video viewing area 12 (i.e., the virtual screen 121); the video viewing area 12 (i.e., the virtual screen 121) can be understood as the video hot zone range. The user generates drag control information by long-pressing a preset button on the interactive device (such as a handle), and when the virtual reality device obtains the drag control information, it can, in response, control the displacement of the ray cursor 11 in the x/y-axis direction to be greater than or equal to x dp so as to drag the video viewing area 12 (i.e., the virtual screen 121).
  • x in x dp is a dynamic value; the size of the rebound can be set according to the strength of the user's drag, and different values can be assigned in different scenarios.
  • the preset keys may include a Grip key, a Trigger key, an A/X key (the same function as Trigger), etc.
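The drag trigger described above combines three conditions. The following is an illustrative sketch under assumed names (`cursor_in_hot_zone`, `key_long_pressed`, etc. are hypothetical), not the disclosure's code:

```python
def drag_triggered(cursor_in_hot_zone, key_long_pressed, dx_dp, dy_dp, x_dp):
    # All three assumed conditions must hold: the cursor focus lies inside
    # the video hot zone, a preset key (Grip / Trigger / A-X) is being
    # long-pressed, and the ray cursor has moved at least x dp along the
    # x- or y-axis (x dp is the dynamic threshold mentioned above).
    moved_enough = abs(dx_dp) >= x_dp or abs(dy_dp) >= x_dp
    return cursor_in_hot_zone and key_long_pressed and moved_enough
```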
  • an interaction avoidance condition may also be set.
  • the interaction avoidance condition includes any of the following: dragging occurs when the "settings panel" pops up; dragging occurs when the "settings panel" disappears.
  • one of the above two is selected for interaction avoidance, for example, setting dragging to occur when the "settings panel" pops up, or setting dragging to occur only after the "settings panel" disappears.
  • the interaction avoidance conditions can also include: when a drag is triggered, "video viewing area drag" is executed first, and "web page interaction" is not executed.
  • the drag object is the video viewing area 12 (i.e., the virtual screen 121), and the video picture displayed in the dragged video viewing area 12 (i.e., the virtual screen 121) is a 2D full-screen video.
  • the first included angle θ between the ray cursor 11 and the video viewing area 12 (i.e., the virtual screen 121) is fixed, and the video viewing area 12 (i.e., the virtual screen 121) is dragged by moving the ray cursor 11.
  • the video viewing area 12 (i.e., the virtual screen 121) is dragged along the spherical surface of the spherical space 20 in at least one of the x-axis direction and the y-axis direction, that is, the video viewing area 12 (i.e., the virtual screen 121) moves with full degrees of freedom on the x-y sphere.
  • the starting position 111 of the ray cursor 11 is the origin A of the spherical space 20, the ray cursor 11 is the radius of the spherical space 20, and the first included angle θ between the ray cursor 11 and the video viewing area 12 (i.e., the virtual screen 121) determines the position of the spherical moving landing point B (its x-axis coordinate and y-axis coordinate); that is, the content seen by the user on the display screen of the virtual reality device is always at the position of point B, tangent to the spherical surface of the spherical space 20, and the first included angle θ remains unchanged during dragging.
  • the line segment AB in the figure represents the ray cursor 11 emitted from the virtual handle 13 to the video viewing area 12 (i.e., the virtual screen 121).
  • during dragging, the RotationZ (lateral rotation) value of the video viewing area is always 0.
  • the center point of the virtual screen 121 moves along the spherical surface of the spherical space 20 in at least one direction of the x-axis direction and the y-axis direction.
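Keeping RotationZ at 0 while the screen stays tangent to the sphere can be sketched as computing only yaw and pitch from the landing point and locking roll. This is an illustrative sketch under an assumed axis convention (y up, z forward), not the disclosure's implementation:

```python
import math

def screen_rotation(origin, landing):
    # Orient the screen so it faces origin A while staying tangent to the
    # sphere at landing point B; RotationZ (lateral rotation / roll) is
    # locked to 0 so the picture never tilts sideways. Angles in degrees.
    dx = landing[0] - origin[0]
    dy = landing[1] - origin[1]
    dz = landing[2] - origin[2]
    yaw = math.degrees(math.atan2(dx, dz))                    # about the y-axis
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # about the x-axis
    roll = 0.0                                                # RotationZ always 0
    return yaw, pitch, roll
```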
  • the method further includes: if the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom of the y-axis direction, controlling the virtual screen to flip 180 degrees around the center of the virtual screen.
  • when the top margin or bottom margin of the virtual screen is dragged to the top/bottom, the virtual screen responds to the drag and at the same time performs a 180° flip around its center point.
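The pole-crossing flip can be sketched as follows; the ±90° pole threshold and the pitch bookkeeping are assumptions for illustration, not values from the disclosure:

```python
def drag_pitch(pitch_deg, delta_deg, flipped):
    # Pitch is measured from the horizon; the poles sit at +/-90 degrees.
    # Dragging past a pole continues onto the far side of the sphere, and
    # the screen flips 180 degrees around its center point at the crossing.
    p = pitch_deg + delta_deg
    if p > 90.0:            # crossed the top of the y-axis direction
        p = 180.0 - p
        flipped = not flipped
    elif p < -90.0:         # crossed the bottom
        p = -180.0 - p
        flipped = not flipped
    return p, flipped
```

For example, dragging from 80° up by 20° lands at 80° on the far side with the screen flipped.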
  • the method further includes: if the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and clipping occurs between the virtual screen and the virtual ground, then the virtual ground is hidden.
  • the disappearance processing condition can be that when the virtual ground 14 is penetrated by the virtual screen 121, the virtual ground 14 is controlled to gradually disappear based on the dragging speed.
  • the virtual ground 14 is completely hidden.
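The speed-dependent fade of the virtual ground can be sketched as a simple alpha update per frame; `fade_rate` and the linear speed coupling are illustrative assumptions, not parameters from the disclosure:

```python
def ground_alpha(alpha, drag_speed, dt, penetrating, fade_rate=1.0):
    # While the virtual screen penetrates the virtual ground, fade the
    # ground out at a rate proportional to the drag speed (a fast drag
    # hides it sooner); otherwise fade it back in. 0.0 = fully hidden,
    # 1.0 = fully visible; the result is clamped to that range.
    if penetrating:
        alpha -= fade_rate * drag_speed * dt
    else:
        alpha += fade_rate * dt
    return min(1.0, max(0.0, alpha))
```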
  • the method further includes:
  • the frame of the virtual screen is controlled to return to normal display.
  • the border 1211 of the virtual screen 121 is highlighted.
  • when the drag ends, the border of the virtual screen returns to the normal state.
  • the handle connected to the virtual reality device will vibrate; when the drag ends, the handle will stop vibrating.
  • a reset field of view instruction can be generated by long-pressing the "Home" key to reset the field of view in response to the reset field of view instruction and reset the position of the virtual screen to a default position in the field of view.
  • the video viewing area is a viewfinder of a preset proportion.
  • in response to the drag control information, taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the video viewing area along the spherical surface of the spherical space includes:
  • in response to the drag control information, controlling the viewfinder frame to be displayed in the virtual environment.
  • the video viewing area is a viewing frame with a preset ratio.
  • the viewfinder frame with a preset ratio may be a 16:9 viewfinder frame.
  • the viewfinder frame may be an area of 1280*720.
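The preset-ratio viewfinder can be sketched as a one-line size computation; this illustrative helper is an assumption, not an identifier from the disclosure:

```python
def viewfinder_size(width_px, ratio=(16, 9)):
    # A viewfinder frame of a preset ratio; 16:9 at a width of 1280 px
    # yields the 1280*720 area mentioned above.
    return width_px, width_px * ratio[1] // ratio[0]
```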
  • the drag trigger conditions include: the ray cursor points to the video viewing area (i.e., the viewfinder frame), and the cursor focus of the ray cursor is in the video viewing area (i.e., the viewfinder frame); the video viewing area (i.e., the viewfinder frame) can be understood as the video hot zone range, which can also be understood as the visual focus area in the immersive experience. The user generates drag control information by long-pressing the preset keys on the interactive device (such as a handle), and when the virtual reality device obtains the drag control information, it can control the displacement of the ray cursor in the x/y-axis direction to be greater than or equal to x dp in response to the drag control information to drag the video viewing area (i.e., the viewfinder frame).
  • the preset keys may include a Grip key, a Trigger key, an A/X key (the same function as Trigger), etc.
  • the interaction avoidance conditions include any of the following: dragging occurs when the "Settings Panel" pops up; dragging occurs when the "Settings Panel" disappears; dragging occurs when the "Immersion Bar" is displayed; dragging occurs when the "Immersion Bar" disappears. One of the above four is selected for interaction avoidance.
  • the video viewing area 12 in FIG. 3 may be a viewfinder 122.
  • the dragged object is the video viewing area 12 (i.e., the viewfinder 122)
  • as the video viewing area 12 (i.e., the viewfinder 122) is dragged, the video picture displayed in it follows the drag and is the video map corresponding to the current drag position in the panoramic video.
  • the first included angle θ between the ray cursor 11 and the video viewing area 12 (i.e., the viewfinder 122) is fixed, and the video viewing area 12 (i.e., the viewfinder 122) is dragged by moving the ray cursor 11.
  • the starting position 111 of the ray cursor 11 is the origin A of the spherical space 20, and the ray cursor 11 serves as the radius of the spherical space 20.
  • fix the first angle θ between the ray cursor 11 and the video viewing area 12 (i.e., the viewfinder 122);
  • move horizontally along the spherical surface of the spherical space 20 to drag the video viewing area 12 (i.e., the viewfinder 122).
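The spherical-drag geometry described above (origin A at the cursor's starting position, the cursor acting as the radius, a fixed included angle) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name `drag_position` and the yaw/pitch parameterization are assumptions.

```python
import math

def drag_position(radius, yaw_deg, pitch_deg):
    """Map the ray cursor's yaw/pitch (in degrees) to a point on the
    spherical space centred on origin A.  Because the radius and the
    included angle are fixed, the viewing area stays on the sphere
    while the cursor moves."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = radius * math.cos(pitch) * math.sin(yaw)   # horizontal (x-axis) drag
    y = radius * math.sin(pitch)                   # vertical (y-axis) drag
    z = radius * math.cos(pitch) * math.cos(yaw)   # forward, toward the screen
    return (x, y, z)
```

Horizontal movement along the sphere, as in the bullet above, corresponds to varying `yaw_deg` while holding `pitch_deg` constant; the distance to origin A never changes.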
  • the method further includes:
  • the area of the virtual environment 10 other than the viewfinder 122 is controlled to display the mask layer 17; for example, the mask layer 17 can be a black mask layer, or a mask layer of another color.
  • the viewfinder frame 122 and the mask layer 17 are hidden, and during the immersive experience, the entire immersive video image of the virtual environment 10 is visible.
  • when dragging is triggered, the handle connected to the virtual reality device vibrates; when the drag ends, the handle stops vibrating.
  • a reset-field-of-view instruction may be generated by long-pressing the "Home" key; in response to the instruction, the field of view is reset and the position of the viewfinder frame 122 is reset to a default position in the field of view.
  • the method further includes:
  • before responding to the reset visual field control instruction, the method further includes:
  • Reset visual field prompt information is displayed in the viewfinder, and the reset visual field prompt information is used to prompt the subject to input the reset visual field control instruction.
  • the reset field of view prompt information can be displayed in the viewfinder in the form of a toast message prompt box 1221.
  • the content of the reset field of view prompt message is "Long press the Home button on either handle to reset the field of view".
  • through the content of the reset vision prompt information, the subject long-presses the Home key or ⊙ key to trigger the reset vision control instruction, so that the virtual reality device performs a vision reset operation in response, such as controlling the left border of the 180-degree panoramic video to move to the left border of the viewfinder, or controlling the right border of the 180-degree panoramic video to move to the right border of the viewfinder.
  • the toast message prompt box 1221 is used to display a visual field reset prompt message in the viewfinder 122.
  • the toast message prompt box 1221 does not contain any control buttons, never gains focus, and automatically disappears after a period of time.
  • the video boundary bounces back to the viewfinder boundary: the left boundary of the video bounces back to the left boundary of the viewfinder, or the right boundary of the video bounces back to the right boundary of the viewfinder. If the viewfinder is not dragged beyond the boundary of the video, the toast message prompt box does not appear.
  • the preset distance value is xdp
  • x in xdp is a dynamic value.
  • the size of the rebound can be set according to the user's drag strength, and different values can be assigned in different scenarios.
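The rebound rule described above can be sketched for one axis as follows. The helper name `rebound_1d` and the edge-based parameterization are assumptions for illustration; `threshold` stands in for the preset distance x dp.

```python
def rebound_1d(vf_left, vf_right, vid_left, vid_right, threshold):
    """If a viewfinder edge is dragged past the matching video edge by
    more than `threshold`, snap the viewfinder back so the edges align
    and report that the toast prompt should be shown."""
    if vid_left - vf_left > threshold:        # dragged too far past the left edge
        shift = vid_left - vf_left            # positive: move back to the right
        return vf_left + shift, vf_right + shift, True
    if vf_right - vid_right > threshold:      # dragged too far past the right edge
        shift = vid_right - vf_right          # negative: move back to the left
        return vf_left + shift, vf_right + shift, True
    return vf_left, vf_right, False           # within bounds: no toast
```

A per-scenario threshold, as the bullet above suggests, would simply pass a different `threshold` value depending on the drag strength or scene.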
  • the method further includes:
  • if the drag control information is generated from keys of the interactive device operated by the object, then in response to the drag control information, vibration prompt information is sent to the interactive device; the vibration prompt information instructs the interactive device to vibrate, indicating that the drag operation has been triggered.
  • the interactive device takes a handle as an example.
  • a vibration prompt information is sent to the handle.
  • the vibration prompt information is used to indicate that the handle vibrates.
  • the handle generates an instantaneous vibration in response to the vibration prompt information to prompt the object that the drag operation is triggered.
  • the vibration duration is x seconds, for example, 3 seconds.
  • the method further includes:
  • if the drag control information is generated based on the object's bare-hand gesture, then in response to the drag control information, voice prompt information is issued; the voice prompt information indicates that the drag operation is triggered.
  • a voice prompt message can be issued, and the voice prompt message is used to prompt that the drag-and-drop operation is triggered.
  • the method further includes: during the dragging process, hiding the ray cursor and displaying the cursor focus of the ray cursor located on the video viewing area.
  • the ray cursor 11 can be hidden and only the cursor focus 112 is displayed.
  • the play bar 15 can also be hidden; the full-screen video can also be displayed in a non-normal (standard) state; the video play/pause state displayed on the virtual screen 121 remains unchanged.
  • the virtual environment is displayed as shown in Figure 7, and the video picture displayed in the video viewing area 12 (i.e., the virtual screen 121) is a 2D full-screen video.
  • the ray cursor 11 can be hidden, and only the cursor focus 112 is displayed.
  • the video play/pause state displayed in the viewfinder 122 remains unchanged.
  • the virtual environment is displayed as shown in FIG. 9, and the video screen displayed in the video viewing area 12 (i.e., the viewfinder 122) is the video map corresponding to the dragging position.
  • Step 130: Determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
  • determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area includes:
  • the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag starting position are both two-dimensional videos played in full screen.
  • the virtual reality device responds to the drag end command, cancels the drag, and the virtual screen stays at the current drag position.
  • the "UI display" of the entire full-screen mode of the virtual screen keeps the relative positions of the "full-screen bar", "full-screen video", and "settings panel" unchanged.
  • the full-screen video is a two-dimensional video played in full screen.
  • the video images displayed in the video viewing area in the 2D video viewing mode are all full-screen videos.
  • the playback or pause state of the video remains unchanged.
  • the display shows the default state.
  • the default state represents the initial state of the display system.
  • the displayed video image is adjusted to a tilted 45° image (the video image after dragging); but the next time the user wears the virtual reality device, the displayed video image is a normal 90° image (the default state).
  • the position drag information of the virtual screen may not be recorded, but the zoom information of the ray cursor may be recorded.
  • the scaling information includes the scaling size of the ray cursor, through which the virtual screen can be zoomed in or out based on the screen center point.
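Zooming the virtual screen about its centre point, as described above, can be sketched as follows. This is an assumed helper for illustration; the patent does not specify the geometry or the name `zoom_about_center`.

```python
def zoom_about_center(center_x, center_y, width, height, scale):
    """Resize the virtual screen by the recorded scaling factor while
    keeping the screen's centre point fixed, so zooming in or out
    never moves the screen's centre."""
    new_w = width * scale
    new_h = height * scale
    left = center_x - new_w / 2.0   # the corners move, the centre does not
    top = center_y - new_h / 2.0
    return left, top, new_w, new_h
```

Recording only `scale` (rather than the drag position) is enough to restore this state, which matches the bullet above.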
  • determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area includes:
  • the video picture displayed by the viewfinder frame is the video map corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed at the viewfinder frame's drag starting position.
  • a drag end command is generated, so that the virtual reality device responds to the drag end command and cancels the drag.
  • the viewfinder stays at the current drag position and disappears by way of an Alpha change, and the video picture displayed in the viewfinder continues to play at the current drag position.
  • the video picture displayed in the viewfinder is the video map corresponding to the current drag position in the panoramic video, in which the video picture displayed by the viewfinder at the current drag position is different from the video picture displayed at the viewfinder's drag starting position.
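Selecting the video map for the viewfinder's current drag position inside a 180-degree panorama can be sketched as a horizontal texture offset. This is a hypothetical helper; the viewfinder field of view `fov_deg` and the [0, 1] offset convention are assumptions, not details from the patent.

```python
def panorama_u_offset(yaw_deg, fov_deg=90.0, span_deg=180.0):
    """Map the viewfinder's yaw inside a 180-degree panorama to a
    horizontal texture offset in [0, 1]: 0.0 shows the leftmost part
    of the video map, 0.5 the centre, 1.0 the rightmost part."""
    half_range = (span_deg - fov_deg) / 2.0            # how far the centre may travel
    yaw = max(-half_range, min(half_range, yaw_deg))   # clamp to the displayable range
    return 0.5 + yaw / (2.0 * half_range)
```

Clamping the yaw is what keeps the viewfinder's picture inside the panorama; behaviour past the boundary is handled separately by the mask/Alpha transition described below in the same embodiment.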
  • in the VR180° immersive viewing mode, the picture is a 180° panorama, which is equivalent to a hemisphere.
  • when the subject wears the virtual reality device, he or she can watch the panoramic video over a 180° range.
  • if the video content displayed in the viewfinder is dragged out of the 180° displayable range as the viewfinder moves, a black mask or an Alpha gradient blends with the viewing picture.
  • the viewing screen is in color.
  • when the video content is dragged out of the 180° range, there is a transition state from the color viewing picture to a pure black scene; this transition state is the Alpha change.
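The Alpha change above, a fade from the colour picture to pure black as the content leaves the 180° range, might be computed as below. The linear ramp and the `fade_range_deg` parameter are assumptions for illustration.

```python
def alpha_for_overshoot(overshoot_deg, fade_range_deg=30.0):
    """Fade the colour picture toward a pure black scene as the content
    is dragged past the 180-degree displayable range.
    1.0 = fully visible, 0.0 = fully black."""
    if overshoot_deg <= 0.0:
        return 1.0                                   # still inside the range
    t = min(overshoot_deg / fade_range_deg, 1.0)     # progress of the transition
    return 1.0 - t
```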
  • the drag position will not be recorded.
  • the display shows the default state.
  • Embodiments of the present disclosure display a virtual environment in which a ray cursor and a video viewing area are presented, where the ray cursor points in the direction of the video viewing area and forms a first angle with the video viewing area; in response to drag control information, the video viewing area is dragged based on the initial position of the ray cursor and the first angle; and the video picture displayed in the video viewing area is determined based on the current drag position of the video viewing area in the virtual environment.
  • the disclosed embodiments design a drag method in three-dimensional space for 2D video, VR180 video, and VR360 video, which allows users to experience the charm of VR space and different viewing angles in the video field, and improves the immersive viewing experience in virtual reality space.
  • the embodiment of the present disclosure also provides a device for adjusting the viewing image in the virtual environment.
  • FIG. 14 is a schematic structural diagram of a viewing image adjustment device in a virtual environment provided by an embodiment of the present disclosure.
  • the viewing screen adjustment device 200 in the virtual environment may include:
  • the display unit 210 is used to display a virtual environment, wherein a ray cursor and a video viewing area are presented in the virtual environment, and the ray cursor points in the direction of the video viewing area and forms a first angle with the video viewing area;
  • the control unit 220 is configured to drag the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first angle;
  • the determining unit 230 is configured to determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
  • control unit 220 is specifically used to:
  • the initial position of the ray cursor is taken as the origin of the spherical space, and the first included angle is fixed, and the video viewing area is dragged along the spherical surface of the spherical space.
  • the viewing mode of the virtual environment is a two-dimensional video viewing mode
  • the video viewing area is a virtual screen
  • the virtual screen is in a display state before responding to the drag control information.
  • when the control unit 220 drags the video viewing area along the spherical surface of the spherical space, it is specifically configured to: drag the virtual screen along the spherical surface of the spherical space in at least one of the x-axis direction and the y-axis direction.
  • control unit 220 is also configured to: if the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom of the y-axis direction, control the The virtual screen is flipped 180 degrees around the center of the virtual screen.
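The 180-degree flip at the top or bottom of the y-axis might be sketched as below. The patent does not give the maths; the pitch-threshold test and the helper name `screen_roll_after_pole` are assumptions.

```python
def screen_roll_after_pole(pitch_deg, roll_deg):
    """If the screen is dragged past the top or bottom of the y-axis
    (pitch beyond +/-90 degrees), flip it 180 degrees about its own
    centre; otherwise leave its roll unchanged."""
    if pitch_deg > 90.0 or pitch_deg < -90.0:
        return (roll_deg + 180.0) % 360.0
    return roll_deg
```

The flip keeps the picture upright for the viewer after the screen crosses a pole of the spherical space.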
  • the control unit 220 is also configured to: if the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and the virtual screen clips through the virtual ground, hide the virtual ground.
  • control unit 220 is also used to:
  • the border of the virtual screen is controlled to return to normal display.
  • the determining unit 230 is specifically used to:
  • the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag starting position are both two-dimensional videos played in full screen.
  • the video viewing area is a viewfinder frame with a preset ratio; before the response to the drag control information, the viewfinder frame is hidden;
  • the control unit 220 is used for:
  • in response to the drag control information, control the viewfinder frame to be displayed in the virtual environment.
  • control unit 220 is further configured to:
  • control unit 220 is also used to:
  • before responding to the reset visual field control instruction, the control unit 220 is also configured to:
  • Reset visual field prompt information is displayed in the viewfinder, and the reset visual field prompt information is used to prompt the subject to input the reset visual field control instruction.
  • the determining unit 230 is specifically used to:
  • the video picture displayed by the viewfinder frame is the video map corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed at the viewfinder frame's drag starting position.
  • control unit 220 is further configured to hide the ray cursor and display the cursor focus of the ray cursor located on the video viewing area during the dragging process.
  • control unit 220 is further configured to:
  • if the drag control information is generated from keys of the interactive device operated by the object, then in response to the drag control information, vibration prompt information is sent to the interactive device; the vibration prompt information instructs the interactive device to vibrate, indicating that the drag operation has been triggered.
  • control unit 220 is also configured to: if the drag control information is drag control information generated based on the object's bare hand gesture, when responding to the drag control information, issue a voice prompt Information, the voice prompt information is used to prompt that the drag operation is triggered.
  • Each unit in the above-mentioned viewing image adjustment device 200 in the virtual environment can be implemented in whole or in part by software, hardware, and combinations thereof.
  • Each of the above-mentioned units may be embedded in or independent of the processor in the virtual reality device in the form of hardware, or may be stored in the memory of the virtual reality device in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned units.
  • the viewing image adjustment device 200 in the virtual environment can be integrated in a terminal or server that has a storage device and a processor and has computing capabilities, or the viewing image adjustment device 200 in the virtual environment is the terminal or server.
  • the present disclosure also provides a virtual reality device, including a memory and a processor.
  • a computer program is stored in the memory.
  • the processor executes the computer program, it implements the steps in the above method embodiments.
  • FIG 15 is a schematic structural diagram of a virtual reality device provided by an embodiment of the present disclosure.
  • the virtual reality device 300 can usually be provided in the form of glasses, a head-mounted display (HMD), or contact lenses, and is used to realize visual perception and other forms of perception.
  • the form of virtual reality equipment is not limited to this, and can be further miniaturized or enlarged as needed.
  • the virtual reality device 300 may include but is not limited to the following components:
  • Detection module 301: uses various sensors to detect the user's operation commands and applies them to the virtual environment, such as continuously updating the image displayed on the display screen to follow the user's line of sight, thereby achieving interaction between the user and the virtual scene, for example, continuously updating the displayed content based on the detected rotation direction of the user's head.
  • Feedback module 302: receives data from the sensors and provides real-time feedback to the user; the feedback module 302 may be used to display a graphical user interface, such as displaying a virtual environment on the graphical user interface.
  • the feedback module 302 may include a display screen or the like.
  • Sensor 303: on the one hand, accepts operation commands from the user and applies them to the virtual environment; on the other hand, provides the results of the operation to the user in the form of various feedbacks.
  • Control module 304: controls the sensors and various input/output devices, including obtaining user data (such as actions and voice) and outputting sensing data (such as images, vibrations, temperature and sound) to act on the user, the virtual environment and the real world.
  • Modeling module 305: constructs a three-dimensional model of the virtual environment, which may also include various feedback mechanisms such as sound and touch in the three-dimensional model.
  • a virtual environment can be constructed through the modeling module 305, and the virtual environment can be displayed through the feedback module 302.
  • a ray cursor and a video viewing area are presented in the virtual environment, wherein the ray cursor points in the direction of the video viewing area and forms a first angle with the video viewing area; the control module 304 then responds to the drag control information and, based on the initial position of the ray cursor and the first angle, drags the video viewing area; the control module 304 then determines the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
  • the virtual reality device 300 also includes a processor 310 with one or more processing cores, a memory 320 with one or more computer-readable storage media, and a computer program stored in the memory 320 and executable on the processor, where the processor 310 is electrically connected to the memory 320.
  • the structure of the virtual reality device shown in the figures does not constitute a limitation on the virtual reality device, and may include more or fewer components than shown, or combine certain components, or arrange the components differently.
  • the processor 310 is the control center of the virtual reality device 300. It uses various interfaces and lines to connect the various parts of the entire virtual reality device 300, and, by running or loading the software programs and/or modules stored in the memory 320 and calling the data stored in the memory 320, performs the various functions of the virtual reality device 300 and processes data, thereby performing overall monitoring of the virtual reality device 300.
  • the processor 310 in the virtual reality device 300 loads instructions corresponding to the processes of one or more application programs into the memory 320 according to the following steps, and runs the application programs stored in the memory 320 to implement various functions:
  • processor 310 may include detection module 301, control module 304, and modeling module 305.
  • the virtual reality device 300 further includes: a radio frequency circuit 306, an audio circuit 307, and a power supply 308.
  • the processor 310 is electrically connected to the memory 320, the feedback module 302, the sensor 303, the radio frequency circuit 306, the audio circuit 307, and the power supply 308, respectively.
  • the structure of the virtual reality device shown in FIG15 or FIG16 does not constitute a limitation on the virtual reality device, and may include more or less components than shown in the figure, or combine certain components, or arrange the components differently.
  • the radio frequency circuit 306 can be used to send and receive radio frequency signals so as to establish wireless communication with network equipment or other virtual reality devices, and to exchange signals with the network equipment or other virtual reality devices.
  • the audio circuit 307 can be used to provide an audio interface between the user and the virtual reality device through speakers and microphones.
  • on the one hand, the audio circuit 307 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 307 and converted into audio data; after being processed by the audio data output processor, the audio data is sent, for example, via the radio frequency circuit 306 to another virtual reality device, or output to the memory for further processing.
  • the audio circuit 307 may also include an earphone jack to provide communication between peripheral headphones and the virtual reality device.
  • the power supply 308 is used to power various components of the virtual reality device 300 .
  • the virtual reality device 300 may further include a camera, a wireless fidelity module, a Bluetooth module, an input module, etc., which will not be described in detail here.
  • the present disclosure also provides a computer-readable storage medium for storing a computer program.
  • the computer-readable storage medium can be applied to a virtual reality device or server, and the computer program causes the virtual reality device or server to execute the corresponding process of the viewing picture adjustment method in a virtual environment in the embodiments of the present disclosure; for the sake of brevity, this will not be described again here.
  • the present disclosure also provides a computer program product including a computer program stored in a computer-readable storage medium.
  • the processor of the virtual reality device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the virtual reality device executes the corresponding process of the viewing picture adjustment method in the virtual environment in the embodiments of the present disclosure; for the sake of brevity, this will not be described again here.
  • the present disclosure also provides a computer program, which is stored in a computer-readable storage medium.
  • the processor of the virtual reality device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the virtual reality device executes the corresponding process in the viewing picture adjustment method in the virtual environment in the embodiment of the present disclosure, For the sake of brevity, no further details will be given here.
  • the processor in the embodiment of the present disclosure may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method embodiment can be completed through an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the above-mentioned processor can be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another available processor.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present disclosure can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiments of the present disclosure may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM).
  • Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • static random access memory (SRAM)
  • dynamic random access memory (DRAM)
  • synchronous dynamic random access memory (SDRAM)
  • double data rate synchronous dynamic random access memory (DDR SDRAM)
  • enhanced synchronous dynamic random access memory (ESDRAM)
  • synchlink dynamic random access memory (SLDRAM)
  • direct Rambus random access memory (DR RAM)
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in the embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present disclosure can be embodied in essence or part of the technical solution in the form of a software product.
  • the computer software product is stored in a storage medium and includes a number of instructions to enable a virtual reality device (which may be a personal computer or a server) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: USB flash drive, mobile hard disk, ROM, RAM, magnetic disk, optical disk, and other media that can store program code.

Abstract

Provided in the present disclosure are a method and apparatus for adjusting a viewing picture in a virtual environment, and a storage medium and a device. The method comprises: displaying a virtual environment, wherein a ray cursor and a video viewing area are presented in the virtual environment, and the ray cursor points to the direction of the video viewing area, and forms a first included angle with the video viewing area; in response to dragging control information, dragging the video viewing area on the basis of the initial position of the ray cursor and the first included angle; and on the basis of the current dragging position of the video viewing area in the virtual environment, determining a video picture, which is displayed in the video viewing area. In the present disclosure, dragging modes for three-dimensional spaces in a 2D video, a VR180 video and a VR360 video are designed, such that a user can experience the fascination of a virtual reality (VR) space and different viewing angles in the field of videos, thereby improving the immersive experience of viewing in the VR space.

Description

Method, apparatus, storage medium and device for adjusting a viewing picture in a virtual environment
This application claims priority to Chinese Patent Application No. 202211146244.1, filed on September 20, 2022, the disclosure of which is hereby incorporated by reference in its entirety as part of this application.
Technical Field
Embodiments of the present disclosure relate to a method, apparatus, storage medium and device for adjusting a viewing picture in a virtual environment.
Background
At present, 2D video, VR180 video, VR360 video and other viewing modes all display the video picture at a fixed position with a fixed frame, which cannot well reflect the immersive experience of viewing in a virtual reality space and results in a poor user experience.
Summary
Embodiments of the present disclosure provide a method, apparatus, storage medium and device for adjusting a viewing picture in a virtual environment, and design a drag method in three-dimensional space for 2D video, VR180 video and VR360 video, which allows users to experience the charm of VR space and different viewing angles in the video field, improving the immersive experience of viewing in virtual reality space.
In one aspect, embodiments of the present disclosure provide a method for adjusting a viewing picture in a virtual environment, the method including:
displaying a virtual environment, in which a ray cursor and a video viewing area are presented, wherein the ray cursor points in the direction of the video viewing area and forms a first included angle with the video viewing area;
in response to drag control information, dragging the video viewing area based on the initial position of the ray cursor and the first included angle; and
determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area.
在一些实施例中,所述响应于拖拽控制信息,并基于所述射线光标的初始位置与所述第一夹角,拖拽所述视频观影区,包括:In some embodiments, dragging the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first angle includes:
响应于拖拽控制信息,以所述射线光标的初始位置为球面空间的原点,并固定所述第一夹角,沿所述球面空间的球面拖拽所述视频观影区。In response to the drag control information, the initial position of the ray cursor is taken as the origin of the spherical space, and the first included angle is fixed, and the video viewing area is dragged along the spherical surface of the spherical space.
在一些实施例中,若所述虚拟环境的观影模式为二维视频观影模式,则所述视频观影区为虚拟屏幕,在响应于拖拽控制信息之前,所述虚拟屏幕处于显示状态;In some embodiments, if the viewing mode of the virtual environment is a two-dimensional video viewing mode, the video viewing area is a virtual screen, and the virtual screen is in a display state before responding to the drag control information. ;
所述沿所述球面空间的球面拖拽所述视频观影区,包括:The dragging of the video viewing area along the spherical surface of the spherical space includes:
在x轴方向与y轴方向中的至少一个方向上,沿所述球面空间的球面拖拽所述虚拟屏幕。Drag the virtual screen along the spherical surface of the spherical space in at least one direction of the x-axis direction and the y-axis direction.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
若在y轴方向上沿所述球面空间的球面拖拽所述虚拟屏幕至所述y轴方向的顶部或底部,则控制所述虚拟屏幕围绕所述虚拟屏幕的中心做180度翻转。If the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom of the y-axis direction, the virtual screen is controlled to flip 180 degrees around the center of the virtual screen.
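To make the flip behavior above concrete, the following is an illustrative Python sketch only; it is not taken from the disclosed implementation, and the function name, degree units and the 90-degree threshold are assumptions. It flips the virtual screen 180 degrees around its own center once it has been dragged to the top or bottom of the y-axis direction:

```python
def update_screen_roll(elevation_deg, roll_deg):
    """Return the virtual screen's roll angle after a drag step.

    If the screen has been dragged to the top (+90 degrees) or the
    bottom (-90 degrees) of the spherical space's y-axis direction,
    flip it 180 degrees around its own center so the picture stays
    upright for the viewer; otherwise keep the current roll.
    """
    if abs(elevation_deg) >= 90.0:
        return (roll_deg + 180.0) % 360.0
    return roll_deg
```

In this sketch the flip is expressed as a roll-angle change, which keeps the screen's position on the sphere unchanged while rotating the picture about the screen's center.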
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
若沿所述球面空间的球面拖拽所述虚拟屏幕至所述虚拟环境中的虚拟地面,且所述虚拟屏幕与所述虚拟地面产生穿模情况,则隐藏所述虚拟地面。If the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and a mold-crossing situation occurs between the virtual screen and the virtual ground, the virtual ground is hidden.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在拖拽过程中,控制所述虚拟屏幕的边框高亮显示;During the dragging process, control the highlighted display of the border of the virtual screen;
在拖拽结束时,控制所述虚拟屏幕的边框恢复常态显示。When the dragging is completed, the frame of the virtual screen is controlled to return to normal display.
在一些实施例中,所述基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面,包括:In some embodiments, determining the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment includes:
基于所述虚拟屏幕位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面为全屏播放的二维视频,其中,所述虚拟屏幕在所述当前拖拽位置显示的视频画面与所述虚拟屏幕在拖拽起始位置显示的视频画面均为全屏播放的二维视频。Based on the current drag position of the virtual screen in the virtual environment, it is determined that the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag starting position are both two-dimensional videos played in full screen.
在一些实施例中,若所述虚拟环境的观影模式为180度或360度的全景视频观影模式,则所述视频观影区为预设比例的取景框,在响应于拖拽控制信息之前,所述取景框处于隐藏状态;In some embodiments, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area is a viewfinder frame with a preset aspect ratio, and before responding to the drag control information, the viewfinder frame is in a hidden state;
所述响应于拖拽控制信息,以所述射线光标的初始位置为球面空间的原点,并固定所述第一夹角,沿所述球面空间的球面拖拽所述视频观影区,包括:In response to the drag control information, taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the video viewing area along the spherical surface of the spherical space includes:
响应于拖拽控制信息,控制所述取景框显示于所述虚拟环境中;In response to the drag control information, control the viewfinder frame to be displayed in the virtual environment;
以所述射线光标的初始位置为球面空间的原点,并固定所述第一夹角,在x轴方向上沿所述球面空间的球面拖拽所述取景框。Taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the viewfinder frame along the spherical surface of the spherical space in the x-axis direction.
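The two steps above (revealing the hidden viewfinder frame and dragging it along the sphere in the x-axis direction only) can be sketched as follows. This is an illustrative assumption-laden sketch, not the disclosed implementation; the state dictionary, field names and angle units are invented for illustration:

```python
import math

def drag_viewfinder(state, d_azimuth):
    """Panoramic (VR180/VR360) drag step: reveal the hidden viewfinder
    frame and move it along the sphere in the x-axis (azimuth) direction
    only; the elevation and the first included angle stay untouched."""
    new_state = dict(state, visible=True)  # frame is shown once dragging starts
    new_state["azimuth"] = (state["azimuth"] + d_azimuth) % (2 * math.pi)
    return new_state

# Example: the frame starts hidden, then is dragged a quarter turn.
frame = {"visible": False, "azimuth": 0.0, "elevation": 0.1}
frame = drag_viewfinder(frame, math.pi / 2)
```

Restricting the update to the azimuth term is what limits panoramic-mode dragging to the x-axis direction, in contrast to the two-axis drag of the 2D virtual screen.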
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在拖拽过程中,控制所述虚拟环境中除所述取景框之外的区域显示蒙层;During the dragging process, control the area in the virtual environment except the viewfinder to display the mask layer;
在拖拽结束时,隐藏所述取景框与所述蒙层。When the dragging is completed, the viewfinder frame and the mask layer are hidden.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在所述取景框的左边界超出所述180度的全景视频的左边界的距离值达到预设距离值时停止拖拽,并响应于重置视野控制指令,控制所述180度的全景视频的左边界移动至所述取景框的左边界;或者Stop dragging when the distance by which the left boundary of the viewfinder frame exceeds the left boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset field-of-view control instruction, control the left boundary of the 180-degree panoramic video to move to the left boundary of the viewfinder frame; or
在所述取景框的右边界超出所述180度的全景视频的右边界的距离值达到预设距离值时停止拖拽,并响应于重置视野控制指令,控制所述180度的全景视频的右边界移动至所述取景框的右边界。Stop dragging when the distance by which the right boundary of the viewfinder frame exceeds the right boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset field-of-view control instruction, control the right boundary of the 180-degree panoramic video to move to the right boundary of the viewfinder frame.
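The overshoot-and-reset behavior for the VR180 boundary can be sketched as below. This is only an illustrative sketch under assumptions (one-dimensional horizontal coordinates, with the frame's left edge overshooting when it moves below the video's left edge); the function names and conventions are not from the disclosure:

```python
def clamp_left_overshoot(frame_left, video_left, max_overshoot):
    """Stop the drag once the viewfinder frame's left edge exceeds the
    180-degree video's left edge by the preset distance; return the
    clamped frame position."""
    if video_left - frame_left > max_overshoot:
        return video_left - max_overshoot
    return frame_left

def reset_field_of_view(frame_left, video_width):
    """On a reset-view instruction, move the video so that its left edge
    coincides with the frame's left edge; return the video's new
    (left, right) edges."""
    return frame_left, frame_left + video_width
```

A symmetric pair of functions would handle the right boundary, clamping in the opposite direction and aligning the video's right edge with the frame's right edge.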
在一些实施例中,在所述响应于重置视野控制指令之前,还包括:In some embodiments, before responding to the reset visual field control instruction, the method further includes:
在所述取景框中显示重置视野提示信息,所述重置视野提示信息用于提示对象输入所述重置视野控制指令。Reset visual field prompt information is displayed in the viewfinder, and the reset visual field prompt information is used to prompt the subject to input the reset visual field control instruction.
在一些实施例中,所述基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面,包括:In some embodiments, determining the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment includes:
基于所述取景框位于所述虚拟环境中的当前拖拽位置,确定所述取景框显示的视频画面为所述全景视频中所述当前拖拽位置对应的视频贴图,其中,所述取景框在所述当前拖拽位置显示的视频画面与所述取景框在拖拽起始位置显示的视频画面不同。Based on the current drag position of the viewfinder frame in the virtual environment, it is determined that the video picture displayed by the viewfinder frame is the video map corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed by the viewfinder frame at the drag starting position.
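One hedged way to picture how the current drag position selects a different video map is the texture-coordinate sketch below; it assumes an equirectangular panoramic texture and normalized u-coordinates, which are illustrative assumptions and not details given in the disclosure:

```python
import math

def visible_texture_span(azimuth, fov, video_span=2 * math.pi):
    """Map the viewfinder frame's current azimuth on the sphere to the
    horizontal span of the panoramic texture it should display, as
    normalized u-coordinates in [0, 1]; a different drag position
    therefore selects a different video map."""
    u_center = (azimuth % video_span) / video_span
    half_width = (fov / video_span) / 2.0
    return u_center - half_width, u_center + half_width
```

For a VR180 video, `video_span` would be `math.pi` instead of the full circle, which is what makes the frame able to overshoot the video's boundary in that mode.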
在一些实施例中,所述方法还包括:在拖拽过程中,隐藏所述射线光标,并显示位于所述视频观影区上的所述射线光标的光标焦点。In some embodiments, the method further includes: during the dragging process, hiding the ray cursor and displaying the cursor focus of the ray cursor located on the video viewing area.
在一些实施例中,所述方法还包括: In some embodiments, the method further includes:
若所述拖拽控制信息为基于对象操控交互设备的按键生成的拖拽控制信息,则在响应于拖拽控制信息时,向所述交互设备发送震动提示信息,所述震动提示信息用于指示所述交互设备发生震动,以提示拖拽操作被触发。If the drag control information is drag control information generated based on the object operating a key of an interactive device, then when responding to the drag control information, vibration prompt information is sent to the interactive device, where the vibration prompt information is used to instruct the interactive device to vibrate, so as to prompt that the drag operation has been triggered.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
若所述拖拽控制信息为基于对象的裸手手势生成的拖拽控制信息,则在响应于拖拽控制信息时,发出语音提示信息,所述语音提示信息用于提示拖拽操作被触发。If the drag control information is drag control information generated based on the object's bare hand gesture, then in response to the drag control information, voice prompt information is issued, and the voice prompt information is used to prompt that the drag operation is triggered.
另一方面,本公开实施例提供一种虚拟环境中的观影画面调整装置,所述装置包括:On the other hand, embodiments of the present disclosure provide a device for adjusting viewing images in a virtual environment. The device includes:
显示单元,用于显示虚拟环境,所述虚拟环境中呈现有射线光标和视频观影区,其中,所述射线光标指向所述视频观影区方向,且与所述视频观影区之间形成第一夹角;a display unit, configured to display a virtual environment, where a ray cursor and a video viewing area are presented in the virtual environment, the ray cursor points in the direction of the video viewing area and forms a first included angle with the video viewing area;
控制单元,用于响应于拖拽控制信息,并基于所述射线光标的初始位置与所述第一夹角,拖拽所述视频观影区;A control unit, configured to respond to the drag control information and drag the video viewing area based on the initial position of the ray cursor and the first angle;
确定单元,用于基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面。A determining unit configured to determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
另一方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序适于处理器进行加载,以执行如上任一实施例所述的虚拟环境中的观影画面调整方法。In another aspect, embodiments of the present disclosure provide a computer-readable storage medium storing a computer program, where the computer program is suitable for being loaded by a processor to execute the method for adjusting a viewing picture in a virtual environment described in any of the above embodiments.
另一方面,本公开实施例提供一种虚拟现实设备,所述虚拟现实设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器通过调用所述存储器中存储的所述计算机程序,用于执行如上任一实施例所述的虚拟环境中的观影画面调整方法。In another aspect, embodiments of the present disclosure provide a virtual reality device, including a processor and a memory storing a computer program, where the processor executes the method for adjusting a viewing picture in a virtual environment described in any of the above embodiments by invoking the computer program stored in the memory.
另一方面,本公开实施例提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现如上任一实施例所述的虚拟环境中的观影画面调整方法。On the other hand, embodiments of the present disclosure provide a computer program product, including a computer program. When the computer program is executed by a processor, the method for adjusting a viewing image in a virtual environment as described in any of the above embodiments is implemented.
附图说明 Description of the drawings
为了更清楚地说明本公开实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure. For those skilled in the art, other drawings can also be obtained based on these drawings without exerting creative efforts.
图1为本公开实施例提供的虚拟环境中的观影画面调整方法的流程示意图;FIG1 is a schematic diagram of a flow chart of a method for adjusting a movie viewing image in a virtual environment provided by an embodiment of the present disclosure;
图2为本公开实施例提供的虚拟环境中的观影画面调整方法的第一应用场景示意图;Figure 2 is a schematic diagram of the first application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
图3为本公开实施例提供的虚拟环境中的观影画面调整方法的第二应用场景示意图;FIG3 is a schematic diagram of a second application scenario of the method for adjusting a movie viewing image in a virtual environment provided by an embodiment of the present disclosure;
图4为本公开实施例提供的虚拟环境中的观影画面调整方法的第三应用场景示意图;Figure 4 is a schematic diagram of the third application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
图5为本公开实施例提供的虚拟环境中的观影画面调整方法的第四应用场景示意图;Figure 5 is a schematic diagram of the fourth application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图6为本公开实施例提供的虚拟环境中的观影画面调整方法的第五应用场景示意图;Figure 6 is a schematic diagram of the fifth application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图7为本公开实施例提供的虚拟环境中的观影画面调整方法的第六应用场景示意图;Figure 7 is a schematic diagram of the sixth application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图8为本公开实施例提供的虚拟环境中的观影画面调整方法的第七应用场景示意图;Figure 8 is a schematic diagram of the seventh application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图9为本公开实施例提供的虚拟环境中的观影画面调整方法的第八应用场景示意图;Figure 9 is a schematic diagram of the eighth application scenario of the method for adjusting the viewing picture in a virtual environment provided by an embodiment of the present disclosure;
图10为本公开实施例提供的虚拟环境中的观影画面调整方法的第九应用场景示意图;Figure 10 is a schematic diagram of the ninth application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图11为本公开实施例提供的虚拟环境中的观影画面调整方法的第十应用场景示意图;Figure 11 is a schematic diagram of the tenth application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图12为本公开实施例提供的虚拟环境中的观影画面调整方法的第十一应用场景示意图;Figure 12 is a schematic diagram of an eleventh application scenario of the viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure;
图13为本公开实施例提供的虚拟环境中的观影画面调整方法的第十二应用场景示意图; FIG13 is a schematic diagram of a twelfth application scenario of the method for adjusting a movie viewing image in a virtual environment provided by an embodiment of the present disclosure;
图14为本公开实施例提供的虚拟环境中的观影画面调整装置的结构示意图;FIG14 is a schematic diagram of the structure of a device for adjusting a viewing image in a virtual environment provided by an embodiment of the present disclosure;
图15为本公开实施例提供的虚拟现实设备的第一结构示意图;以及Figure 15 is a first structural schematic diagram of a virtual reality device provided by an embodiment of the present disclosure; and
图16为本公开实施例提供的虚拟现实设备的第二结构示意图。Figure 16 is a second structural schematic diagram of a virtual reality device provided by an embodiment of the present disclosure.
具体实施方式Detailed Description
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of the embodiments. Based on the embodiments in this disclosure, all other embodiments obtained by those skilled in the art without making creative efforts fall within the scope of protection of this disclosure.
本公开实施例提供一种虚拟环境中的观影画面调整方法、装置、计算机可读存储介质、虚拟现实设备、服务器及计算机程序产品。具体地,本公开实施例的虚拟环境中的观影画面调整方法可以由虚拟现实设备或者由服务器执行。Embodiments of the present disclosure provide a method, device, computer-readable storage medium, virtual reality device, server, and computer program product for adjusting a viewing image in a virtual environment. Specifically, the viewing picture adjustment method in the virtual environment of the embodiment of the present disclosure can be executed by a virtual reality device or a server.
本公开实施例可以应用于扩展现实(Extended Reality,XR)、虚拟现实(Virtual Reality,VR)、增强现实(Augmented Reality,AR)、混合现实(Mixed Reality,MR)等各种应用场景。The disclosed embodiments can be applied to various application scenarios such as Extended Reality (XR), Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
首先,在对本公开实施例进行描述的过程中出现的部分名词或者术语作如下解释:First, some nouns or terms that appear in the description of the embodiments of the present disclosure are explained as follows:
扩展现实(Extended Reality,XR),是包括虚拟现实(Virtual Reality,VR)、增强现实(Augmented Reality,AR)及混合现实(Mixed Reality,MR)的概念,表示制成虚拟世界与现实世界相连接的环境,用户能够与该环境实时交互的技术。Extended Reality (XR) is a concept encompassing Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR), and refers to technology that creates an environment in which the virtual world is connected with the real world and with which users can interact in real time.
虚拟现实(Virtual Reality,VR),创建和体验虚拟世界的技术,计算生成一种虚拟环境,是一种多源信息(本文中提到的虚拟现实至少包括视觉感知,此外还可以包括听觉感知、触觉感知、运动感知,甚至还包括味觉感知、嗅觉感知等),实现虚拟环境的融合的、交互式的三维动态视景和实体行为的仿真,使用户沉浸到模拟的虚拟现实环境中,实现在诸如地图、游戏、视频、教育、医疗、模拟、协同训练、销售、协助制造、维护和修复等多种虚拟环境的应用。Virtual Reality (VR) is a technology for creating and experiencing virtual worlds, in which a virtual environment is computationally generated. It is a simulation based on multi-source information (the virtual reality mentioned herein includes at least visual perception, and may further include auditory perception, tactile perception, motion perception, and even taste perception, olfactory perception, etc.), providing a fused, interactive three-dimensional dynamic scene of the virtual environment together with simulation of entity behavior, so that users are immersed in the simulated virtual reality environment, enabling applications in a variety of virtual environments such as maps, games, videos, education, medical care, simulation, collaborative training, sales, assisted manufacturing, and maintenance and repair.
增强现实(Augmented Reality,AR),一种在相机采集图像的过程中,实时地计算相机在现实世界(或称三维世界、真实世界)中的相机姿态参数,根据该相机姿态参数在相机采集的图像上添加虚拟元素的技术。虚拟元素包括但不限于:图像、视频和三维模型。AR技术的目标是在屏幕上把虚拟世界套接在现实世界上进行互动。Augmented Reality (AR) is a technology in which, during image capture by a camera, the camera's pose parameters in the real world (also called the three-dimensional world or physical world) are computed in real time, and virtual elements are added to the images captured by the camera according to these camera pose parameters. Virtual elements include, but are not limited to: images, videos, and three-dimensional models. The goal of AR technology is to overlay the virtual world on the real world on the screen for interaction.
混合现实(Mixed Reality,MR),将计算机创建的感官输入(例如,虚拟对象)与来自物理布景的感官输入或其表示集成的模拟布景,一些MR布景中,计算机创建的感官输入可以适应于来自物理布景的感官输入的变化。另外,用于呈现MR布景的一些电子系统可以监测相对于物理布景的取向和/或位置,以使虚拟对象能够与真实对象(即来自物理布景的物理元素或其表示)交互。例如,系统可监测运动,使得虚拟植物相对于物理建筑物看起来是静止的。Mixed Reality (MR) refers to simulated settings that integrate computer-created sensory input (for example, virtual objects) with sensory input from the physical setting or a representation thereof. In some MR settings, the computer-created sensory input can adapt to changes in the sensory input from the physical setting. In addition, some electronic systems for presenting MR settings can monitor orientation and/or position relative to the physical setting, so that virtual objects can interact with real objects (i.e., physical elements from the physical setting or representations thereof). For example, a system can monitor motion so that a virtual plant appears stationary relative to a physical building.
增强虚拟(Augmented Virtuality,AV):AV布景是指计算机创建布景或虚拟布景并入来自物理布景的至少一个感官输入的模拟布景。来自物理布景的一个或多个感官输入可为物理布景的至少一个特征的表示。例如,虚拟对象可呈现由一个或多个成像传感器捕获的物理元素的颜色。又如,虚拟对象可呈现出与物理布景中的实际天气条件相一致的特征,如经由天气相关的成像传感器和/或在线天气数据所识别的。在另一个示例中,增强现实森林可具有虚拟树木和结构,但动物可具有从对物理动物拍摄的图像精确再现的特征。Augmented Virtuality (AV): AV scenery refers to a computer-created scenery or virtual scenery that incorporates at least one sensory input from the physical scenery to simulate the scenery. The one or more sensory inputs from the physical setting may be a representation of at least one feature of the physical setting. For example, a virtual object may take on the color of a physical element captured by one or more imaging sensors. As another example, virtual objects may exhibit characteristics consistent with actual weather conditions in the physical scene, as identified via weather-related imaging sensors and/or online weather data. In another example, an augmented reality forest can have virtual trees and structures, but the animals can have features accurately recreated from images taken of physical animals.
虚拟视场,用户在虚拟现实设备中通过透镜所能够感知到的虚拟环境中的区域,使用虚拟视场的视场角(Field Of View,FOV)来表示所感知到区域。Virtual field of view, the area in the virtual environment that the user can perceive through the lens in the virtual reality device, uses the field of view (Field Of View, FOV) of the virtual field of view to represent the perceived area.
虚拟现实设备,实现虚拟现实效果的终端,通常可以提供为眼镜、头盔式显示器(Head Mount Display,HMD)、隐形眼镜的形态,以用于实现视觉感知和其他形式的感知,当然虚拟现实设备实现的形态不限于此,根据需要可以进一步小型化或大型化。A virtual reality device is a terminal for realizing virtual reality effects, and can typically be provided in the form of glasses, a head-mounted display (HMD), or contact lenses, so as to realize visual perception and other forms of perception. Of course, the form in which a virtual reality device is implemented is not limited thereto, and it can be further miniaturized or enlarged as needed.
本公开实施例记载的虚拟现实设备可以包括但不限于如下几个类型:The virtual reality devices described in the embodiments of the present disclosure may include, but are not limited to, the following types:
电脑端虚拟现实(PCVR)设备,利用PC端进行虚拟现实功能的相关计算以及数据输出,外接的电脑端虚拟现实设备利用PC端输出的数据实现虚拟现实的效果。Computer-based virtual reality (PCVR) devices use the PC to perform relevant calculations and data output for virtual reality functions. External computer-based virtual reality devices use the data output by the PC to achieve virtual reality effects.
移动虚拟现实设备,支持以各种方式(如设置有专门的卡槽的头戴式显示器)设置移动终端(如智能手机),通过与移动终端有线或无线方式的连接,由移动终端进行虚拟现实功能的相关计算,并输出数据至移动虚拟现实设备,例如通过移动终端的APP观看虚拟现实视频。Mobile virtual reality equipment supports setting up a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display with a special card slot), and through a wired or wireless connection with the mobile terminal, the mobile terminal performs virtual reality Function-related calculations and output data to mobile virtual reality devices, such as viewing virtual reality videos through mobile terminal APPs.
一体机虚拟现实设备,具备用于进行虚拟功能的相关计算的处理器,因而具备独立的虚拟现实输入和输出的功能,不需要与PC端或移动终端连接,使用自由度高。The all-in-one virtual reality device has a processor for performing calculations related to virtual functions, so it has independent virtual reality input and output functions. It does not need to be connected to a PC or mobile terminal, and has a high degree of freedom in use.
以下分别进行详细说明。需说明的是,以下实施例的描述顺序不作为对实施例优先顺序的限定。Each is explained in detail below. It should be noted that the description order of the following embodiments is not used to limit the priority order of the embodiments.
本公开各实施例提供了一种虚拟环境中的观影画面调整方法,该方法可以由终端或服务器执行,也可以由终端和服务器共同执行;本公开实施例以虚拟环境中的观影画面调整方法由终端(虚拟现实设备)执行为例来进行说明。Each embodiment of the present disclosure provides a method for adjusting the viewing image in a virtual environment. The method can be executed by a terminal or a server, or can be executed jointly by a terminal and a server. The embodiments of the present disclosure use the method of adjusting the viewing image in a virtual environment The method is explained by taking the execution of the terminal (virtual reality device) as an example.
请参阅图1至图13,图1为本公开实施例提供的虚拟环境中的观影画面调整方法的流程示意图,图2至图13均为本公开实施例提供的相关应用场景示意图,其中,图2中的空白背景可以为三维虚拟空间层。该方法包括:Please refer to Figures 1 to 13. Figure 1 is a schematic flowchart of a viewing picture adjustment method in a virtual environment provided by an embodiment of the present disclosure. Figures 2 to 13 are schematic diagrams of relevant application scenarios provided by an embodiment of the present disclosure, wherein, The blank background in Figure 2 can be a three-dimensional virtual space layer. The method includes:
步骤110,显示虚拟环境,所述虚拟环境中呈现有射线光标和视频观影区,其中,所述射线光标指向所述视频观影区方向,且与所述视频观影区之间形成第一夹角。Step 110: Display a virtual environment, where a ray cursor and a video viewing area are presented in the virtual environment, the ray cursor points in the direction of the video viewing area and forms a first included angle with the video viewing area.
例如,如图2所示,显示虚拟环境10,虚拟环境10中呈现有射线光标11和视频观影区12,其中,射线光标11指向视频观影区12方向,且与视频观影区12之间形成第一夹角α。例如,为了丰富虚拟环境10的呈现画面,可以在射线光标11的起始位置111呈现有虚拟手柄13,虚拟手柄13的正前方具有向视频观影区12方向出射的射线光标11,射线光标11与视频观影区12之间形成第一夹角α。其中,第一夹角α即为射线光标11与视频观影区12的所在平面形成的夹角。For example, as shown in FIG. 2, a virtual environment 10 is displayed, in which a ray cursor 11 and a video viewing area 12 are presented, where the ray cursor 11 points in the direction of the video viewing area 12 and forms a first included angle α with the video viewing area 12. For example, to enrich the presentation of the virtual environment 10, a virtual handle 13 can be presented at the starting position 111 of the ray cursor 11; directly in front of the virtual handle 13 there is a ray cursor 11 emitted toward the video viewing area 12, and a first included angle α is formed between the ray cursor 11 and the video viewing area 12. The first included angle α is the angle formed between the ray cursor 11 and the plane where the video viewing area 12 is located.
在2D中,观影区即为交底中的视频面板(下文称虚拟屏幕);VR180和VR360中,观影区即为视觉聚焦区域,即后续拖拽时对应的取景框。In 2D mode, the viewing area is the video panel described in this disclosure (hereinafter referred to as the virtual screen); in VR180 and VR360 modes, the viewing area is the visual focus region, i.e., the viewfinder frame used in the subsequent dragging.
其中,虚拟环境10中呈现的射线光标11和视频观影区12可以正常显示,也可以隐藏显示。比如,若虚拟环境10的观影模式为二维视频观影模式,则该视频观影区12可以为虚拟屏幕,在虚拟屏幕被拖拽前及被拖拽过程中,该虚拟屏幕均可处于显示状态,以模拟对象(该对象可以为用户)处于电影院等观影场景中;该射线光标11在虚拟屏幕被拖拽前可以处于显示状态,该射线光标11在虚拟屏幕被拖拽过程中可以处于隐藏状态;比如若在射线光标11的起始位置111还呈现有虚拟手柄13,则该虚拟手柄13在虚拟屏幕被拖拽前可以处于显示状态,该虚拟手柄13在虚拟屏幕被拖拽过程中可以处于隐藏状态。比如,若虚拟环境10的观影模式为180度或360度的全景视频观影模式,则该视频观影区12可以为预设比例的取景框(相当于用户在虚拟环境中的视觉聚焦区),在取景框被拖拽前,该取景框可以处于隐藏状态,以模拟用户处于全景观影场景中,整个虚拟环境的画面显示的是全景视频;在取景框被拖拽过程中,取景框可以处于显示状态;该射线光标11在取景框被拖拽前可以处于显示状态,该射线光标11在取景框被拖拽过程中可以处于隐藏状态;比如若在射线光标11的起始位置111还呈现有虚拟手柄13,则该虚拟手柄13在取景框被拖拽前可以处于显示状态,该虚拟手柄13在取景框被拖拽过程中可以处于隐藏状态。Here, the ray cursor 11 and the video viewing area 12 presented in the virtual environment 10 can be displayed normally or hidden. For example, if the viewing mode of the virtual environment 10 is a two-dimensional video viewing mode, the video viewing area 12 can be a virtual screen; both before and while the virtual screen is being dragged, the virtual screen can be in a display state, so as to simulate that the object (which can be a user) is in a viewing scene such as a cinema. The ray cursor 11 can be in a display state before the virtual screen is dragged and can be in a hidden state while the virtual screen is being dragged; for example, if a virtual handle 13 is also presented at the starting position 111 of the ray cursor 11, the virtual handle 13 can be in a display state before the virtual screen is dragged and can be in a hidden state while the virtual screen is being dragged. For example, if the viewing mode of the virtual environment 10 is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area 12 can be a viewfinder frame with a preset aspect ratio (equivalent to the user's visual focus area in the virtual environment); before the viewfinder frame is dragged, the viewfinder frame can be in a hidden state, so as to simulate that the user is in a panoramic viewing scene in which the entire virtual environment displays the panoramic video, and while the viewfinder frame is being dragged, the viewfinder frame can be in a display state. The ray cursor 11 can be in a display state before the viewfinder frame is dragged and can be in a hidden state while the viewfinder frame is being dragged; for example, if a virtual handle 13 is also presented at the starting position 111 of the ray cursor 11, the virtual handle 13 can be in a display state before the viewfinder frame is dragged and can be in a hidden state while the viewfinder frame is being dragged.
步骤120,响应于拖拽控制信息,并基于所述射线光标的初始位置与所述第一夹角,拖拽所述视频观影区。Step 120: Drag the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first angle.
例如,可以基于对象操控交互设备的按键生成的拖拽控制信息。例如,可以在虚拟现实设备的显示屏上显示虚拟环境。该虚拟现实设备可以外接有交互设备,可以通过操控交互设备的按键来驱动虚拟环境中的射线光标,以及可以响应于拖拽控制信息控制射线光标驱动对视频观影区的拖拽操作。例如,该交互设备可以包括手柄、数字手套、特定交互装置等。For example, the drag control information may be generated based on the object operating keys of an interactive device. For example, the virtual environment can be displayed on the display screen of a virtual reality device. The virtual reality device can be externally connected to an interactive device; the ray cursor in the virtual environment can be driven by operating the keys of the interactive device, and, in response to the drag control information, the ray cursor can be controlled to drive the drag operation on the video viewing area. For example, the interactive device may include a handle, a digital glove, a specific interactive apparatus, etc.
在一些实施例中,所述响应于拖拽控制信息,并基于所述射线光标的初始位置与所述第一夹角,拖拽所述视频观影区,包括:In some embodiments, dragging the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first angle includes:
响应于拖拽控制信息,以所述射线光标的初始位置为球面空间的原点,并固定所述第一夹角,沿所述球面空间的球面拖拽所述视频观影区。In response to the drag control information, the initial position of the ray cursor is taken as the origin of the spherical space, and the first included angle is fixed, and the video viewing area is dragged along the spherical surface of the spherical space.
例如,如图3所示,可以设置一个球面空间20,该球面空间20作为观影容器,在进入观影模式时,可以在该球面空间20的观影容器中进行观影。当响应于拖拽控制信息时,以射线光标11的起始位置111为球面空间20的原点A,并固定射线光标11与视频观影区12所在平面之间的第一夹角α,沿球面空间20的球面拖拽视频观影区12。以控制射线光标11的光标焦点112位于可拖动的视频观影区内,并沿x方向和/或y轴方向,可以在球形空间进行人机交互实现观影画面拖拽。For example, as shown in FIG. 3, a spherical space 20 can be provided to serve as a viewing container; when the viewing mode is entered, viewing can be performed within the viewing container of the spherical space 20. When responding to the drag control information, the starting position 111 of the ray cursor 11 is taken as the origin A of the spherical space 20, the first included angle α between the ray cursor 11 and the plane where the video viewing area 12 is located is fixed, and the video viewing area 12 is dragged along the spherical surface of the spherical space 20. By controlling the cursor focus 112 of the ray cursor 11 to stay within the draggable video viewing area and moving along the x-axis and/or y-axis direction, human-computer interaction can be performed in the spherical space to drag the viewing picture.
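The drag along the spherical viewing container described above can be sketched in Python as follows. This is a hedged illustration only, assuming spherical coordinates (azimuth/elevation in radians) centered on the ray cursor's initial position; the function name and coordinate conventions are assumptions, not details from the disclosure:

```python
import math

def drag_on_sphere(radius, azimuth, elevation, d_azimuth, d_elevation):
    """One drag step on the spherical viewing container: the sphere is
    centered at the ray cursor's initial position, and the radius (and
    hence the first included angle between the ray and the viewing
    plane) is held fixed, so only the angular position changes."""
    azimuth = (azimuth + d_azimuth) % (2 * math.pi)
    # Keep the viewing area on the sphere by clamping the elevation.
    elevation = max(-math.pi / 2, min(math.pi / 2, elevation + d_elevation))
    # Convert back to Cartesian coordinates for rendering.
    x = radius * math.cos(elevation) * math.sin(azimuth)
    y = radius * math.sin(elevation)
    z = radius * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z), azimuth, elevation
```

Because the radius never changes, the dragged viewing area always stays on the sphere's surface, which is exactly what keeps the first included angle fixed during the drag.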
示例性地,也可以设置一个立方体空间,该立方体空间作为观影容器,在进入观影模式时,可以在该立方体空间的观影容器中进行观影。当响应于拖拽控制信息时,以射线光标的初始位置为立方体空间的中心点,并固定射线光标与视频观影区所在平面之间的第一夹角,沿立方体空间的表面拖拽视频观影区。以控制射线光标的光标焦点位于可拖动的视频观影区内,并沿x方向和/或y轴方向,可以在立方体空间进行人机交互实现观影画面拖拽。Illustratively, a cubic space can also be provided to serve as a viewing container; when the viewing mode is entered, viewing can be performed within the viewing container of the cubic space. When responding to the drag control information, the initial position of the ray cursor is taken as the center point of the cubic space, the first included angle between the ray cursor and the plane where the video viewing area is located is fixed, and the video viewing area is dragged along the surface of the cubic space. By controlling the cursor focus of the ray cursor to stay within the draggable video viewing area and moving along the x-axis and/or y-axis direction, human-computer interaction can be performed in the cubic space to drag the viewing picture.
在一些实施例中,若所述虚拟环境的观影模式为二维视频观影模式,则所述视频观影区为虚拟屏幕,在响应于拖拽控制信息之前,所述虚拟屏幕处于显示状态;所述沿所述球面空间的球面拖拽所述视频观影区,包括:在x轴方向与y轴方向中的至少一个方向上,沿所述球面空间的球面拖拽所述虚拟屏幕。 In some embodiments, if the viewing mode of the virtual environment is a two-dimensional video viewing mode, the video viewing area is a virtual screen, and the virtual screen is in a display state before responding to the drag control information; dragging the video viewing area along the spherical surface of the spherical space includes: dragging the virtual screen along the spherical surface of the spherical space in at least one direction of the x-axis direction and the y-axis direction.
例如，若虚拟环境的观影模式为二维（2D）视频观影模式，则视频观影区为虚拟屏幕，在响应于拖拽控制信息之前，虚拟屏幕处于显示状态。例如，2D视频观影模式可参图2或图4所示的虚拟环境10，该虚拟环境10中呈现有射线光标11和视频观影区12（即虚拟屏幕121），其中，射线光标11指向视频观影区12方向，且与视频观影区12之间形成第一夹角α。例如，为了丰富虚拟环境10的呈现画面，可以在射线光标11的起始位置111呈现有虚拟手柄13，虚拟手柄13的正前方具有向视频观影区12（即虚拟屏幕121）方向出射的射线光标11，射线光标11与视频观影区12（即虚拟屏幕121）之间形成第一夹角α。For example, if the viewing mode of the virtual environment is a two-dimensional (2D) video viewing mode, the video viewing area is a virtual screen, and the virtual screen is in a display state before the drag control information is responded to. For example, the 2D video viewing mode may be as the virtual environment 10 shown in FIG. 2 or FIG. 4, in which a ray cursor 11 and a video viewing area 12 (i.e., a virtual screen 121) are presented, where the ray cursor 11 points toward the video viewing area 12 and forms a first included angle α with it. For example, to enrich the presented picture of the virtual environment 10, a virtual handle 13 may be presented at the starting position 111 of the ray cursor 11; the ray cursor 11 is emitted from directly in front of the virtual handle 13 toward the video viewing area 12 (i.e., the virtual screen 121), forming the first included angle α with it.
例如，在2D视频观影模式中，拖拽触发条件包括射线光标11指向视频观影区12（即虚拟屏幕121），且射线光标11的光标焦点112在视频观影区12（即虚拟屏幕121）内，该视频观影区12（即虚拟屏幕121）可以理解为视频热区范围，并且对象通过长按交互设备（比如手柄）中的预设按键产生拖拽控制信息，以及虚拟现实设备获取到该拖拽控制信息时，可以响应于拖拽控制信息控制射线光标11在x/y轴方向的位移大于或等于xdp，以拖拽视频观影区12（即虚拟屏幕121）。For example, in the 2D video viewing mode, the drag trigger condition includes that the ray cursor 11 points at the video viewing area 12 (i.e., the virtual screen 121) and the cursor focus 112 of the ray cursor 11 lies within it; the video viewing area 12 (i.e., the virtual screen 121) can be understood as the video hot zone. The subject generates drag control information by long-pressing a preset button on an interactive device (such as a handle), and when the virtual reality device obtains this drag control information, it can, in response, control the ray cursor 11 to be displaced by at least xdp in the x/y-axis direction so as to drag the video viewing area 12 (i.e., the virtual screen 121).
例如,xdp中的x是个动态值,可以根据用户拖拽的力度来设置回弹的大小,不同场景下可以赋不同的值。其中,dp为与密度无关的像素,是一种基于屏幕密度的抽象单位,在每英寸160点的显示器上,1dp=1px。For example, x in xdp is a dynamic value. The size of the rebound can be set according to the strength of the user's drag, and different values can be assigned in different scenarios. Among them, dp is a density-independent pixel, which is an abstract unit based on screen density. On a display with 160 dots per inch, 1dp=1px.
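As a minimal illustration of the dp unit described above (assuming the Android-style definition stated in the text, where 1 dp equals 1 px on a 160-dpi display), the conversion to physical pixels can be sketched as:

```python
def dp_to_px(dp: float, dpi: float) -> float:
    """Convert density-independent pixels (dp) to physical pixels.

    By definition, 1 dp equals 1 px on a 160-dpi display, so the
    conversion simply scales by dpi / 160.
    """
    return dp * dpi / 160.0
```

The dynamic `x` in `xdp` would then be chosen per scene and converted with this function for the target display.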
例如,预设按键可以包括Grip(抓握)键、Trigger(扳机)键、A/X键(功能同Trigger)等。For example, the preset keys may include a Grip key, a Trigger key, an A/X key (the same function as Trigger), etc.
例如,在响应于拖拽控制信息之前,除了上述拖拽触发条件,还可以进行设置交互规避条件。例如,交互规避条件包括以下任一种:“设置面板”弹出时产生拖拽;“设置面板”消失时产生拖拽。其中,以上二者选其一进行交互规避,比如设置为“设置面板”弹出时产生拖拽,或者设置为“设置面板”消失时产生拖拽。For example, before responding to the drag control information, in addition to the above-mentioned drag triggering conditions, an interaction avoidance condition may also be set. For example, the interaction avoidance condition includes any of the following: dragging occurs when the "settings panel" pops up; dragging occurs when the "settings panel" disappears. Among them, one of the above two is selected for interaction avoidance, such as setting the dragging to occur when the "settings panel" pops up, or setting the dragging to occur when the "settings panel" disappears.
例如，当存在网页交互冲突时，交互规避条件还可以包括：拖拽触发时，优先执行“视频观影区拖拽”，不执行“网页交互”。For example, when there is a web page interaction conflict, the interaction avoidance condition may also include: when a drag is triggered, "video viewing area dragging" is executed first and "web page interaction" is not executed.
例如，在2D视频观影模式中，当响应于拖拽控制信息时，拖拽对象为视频观影区12（即虚拟屏幕121），跟随被拖拽的视频观影区12（即虚拟屏幕121），在视频观影区12（即虚拟屏幕121）中显示的视频画面为2D全屏视频。在拖拽过程中，固定射线光标11和视频观影区12（即虚拟屏幕121）的第一夹角α，通过移动射线光标11对视频观影区12（即虚拟屏幕121）进行拖拽。For example, in the 2D video viewing mode, when responding to the drag control information, the dragged object is the video viewing area 12 (i.e., the virtual screen 121); following the dragged video viewing area 12 (i.e., the virtual screen 121), the video picture displayed in it is a 2D full-screen video. During dragging, the first angle α between the ray cursor 11 and the video viewing area 12 (i.e., the virtual screen 121) is fixed, and the video viewing area 12 (i.e., the virtual screen 121) is dragged by moving the ray cursor 11.
如图3所示，在2D视频观影模式中，在x轴方向与y轴方向中的至少一个方向上，沿球面空间20的球面，呈球面拖拽视频观影区12（即虚拟屏幕121），即x-y球面全自由度移动视频观影区12（即虚拟屏幕121）。具体的，以射线光标11的起始位置111为球面空间20的原点A，射线光标11为球面空间20的半径，固定射线光标11与视频观影区12（即虚拟屏幕121）的第一夹角α，呈球面移动落点B的位置（x轴坐标与y轴坐标），即对象在虚拟现实设备的显示屏上看到的内容始终是与球面空间20的球面相切的B点位置，拖拽期间保证第一夹角α不变。例如，图中的线段AB表示从虚拟手柄13射向视频观影区12（即虚拟屏幕121）的射线光标11。As shown in FIG. 3, in the 2D video viewing mode, the video viewing area 12 (i.e., the virtual screen 121) is dragged spherically along the spherical surface of the spherical space 20 in at least one of the x-axis and y-axis directions; that is, the video viewing area 12 (i.e., the virtual screen 121) moves with full x-y freedom on the sphere. Specifically, the starting position 111 of the ray cursor 11 is the origin A of the spherical space 20, the ray cursor 11 is the radius of the spherical space 20, and the first angle α between the ray cursor 11 and the video viewing area 12 (i.e., the virtual screen 121) is fixed; the landing point B moves spherically (in x-axis and y-axis coordinates), i.e., what the subject sees on the display of the virtual reality device is always the position of point B tangent to the sphere of the spherical space 20, and the first included angle α remains unchanged throughout the drag. For example, the line segment AB in the figure represents the ray cursor 11 shot from the virtual handle 13 toward the video viewing area 12 (i.e., the virtual screen 121).
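The spherical drag just described can be sketched as an update of two spherical angles around origin A, with the radius (segment AB) and the ray-to-screen angle held fixed so that the screen stays tangent to the sphere. This is an illustrative reconstruction, not the patented implementation; the function name, the `sensitivity` parameter, and the coordinate convention (y up, azimuth about the y axis) are assumptions:

```python
import math

def drag_on_sphere(radius, azimuth, elevation, dx, dy, sensitivity=0.005):
    """Move the screen anchor point B along the sphere centred at the
    ray origin A. dx/dy are cursor displacements in the x/y axis
    directions; the radius (length of AB) and the angle between the ray
    and the screen plane stay fixed, so only the two angles change.
    Returns B's Cartesian position (relative to A) and the new angles.
    """
    azimuth += dx * sensitivity
    # Clamp elevation so B stays between the bottom and top of the sphere.
    elevation = max(-math.pi / 2, min(math.pi / 2, elevation + dy * sensitivity))
    # Convert back to Cartesian coordinates relative to origin A.
    x = radius * math.cos(elevation) * math.sin(azimuth)
    y = radius * math.sin(elevation)
    z = radius * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z), azimuth, elevation
```

Because only the two angles change, point B always stays on the sphere of the given radius, which is what keeps the first included angle α constant during the drag.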
例如,在拖拽过程中,视频观影区的RotationZ(横向旋转)值始终为0。For example, during the dragging process, the RotationZ (lateral rotation) value of the video viewing area is always 0.
如图5所示,在拖拽时,虚拟屏幕121的中心点沿球面空间20的球面在x轴方向与y轴方向中的至少一个方向上进行移动。As shown in FIG. 5 , during dragging, the center point of the virtual screen 121 moves along the spherical surface of the spherical space 20 in at least one direction of the x-axis direction and the y-axis direction.
在一些实施例中，所述方法还包括：若在y轴方向上沿所述球面空间的球面拖拽所述虚拟屏幕至所述y轴方向的顶部或底部，则控制所述虚拟屏幕围绕所述虚拟屏幕的中心做180度翻转。In some embodiments, the method further includes: if the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom of the y-axis direction, controlling the virtual screen to flip 180 degrees around the center of the virtual screen.
例如，在2D视频观影模式中，y轴方向达到顶部/底部后，控制虚拟屏幕进行180°翻转。例如，始终保证在y轴方向上，虚拟屏幕的上边距>=虚拟屏幕的下边距，当拖拽至虚拟屏幕的上边距=虚拟屏幕的下边距时（拖拽至最上/最下），同时响应拖拽并围绕虚拟屏幕的中心点做180°翻转。For example, in the 2D video viewing mode, after the top/bottom is reached in the y-axis direction, the virtual screen is controlled to flip 180°. For example, it is always ensured that, in the y-axis direction, the top margin of the virtual screen >= its bottom margin; when the screen is dragged until the top margin equals the bottom margin (dragged to the very top/bottom), the screen simultaneously responds to the drag and flips 180° around its center point.
例如，如图6所示，在拖拽时，虚拟屏幕121的中心点沿球面空间20的球面在y轴方向上进行移动，虚拟屏幕121的中心点与球面相切，在继续拖拽过程中，当拖拽虚拟屏幕121至y轴方向的顶部或底部时，控制虚拟屏幕121进行180°翻转，且若拖拽操作还在持续，则在翻转后继续响应拖拽的位移变化。For example, as shown in FIG. 6, during dragging, the center point of the virtual screen 121 moves along the spherical surface of the spherical space 20 in the y-axis direction, with the center point tangent to the sphere. As the drag continues, when the virtual screen 121 is dragged to the top or bottom of the y-axis direction, the virtual screen 121 is controlled to flip 180°, and if the drag operation is still ongoing, it continues to respond to the drag displacement after the flip.
在一些实施例中，所述方法还包括：若沿所述球面空间的球面拖拽所述虚拟屏幕至所述虚拟环境中的虚拟地面，且所述虚拟屏幕与所述虚拟地面产生穿模情况，则隐藏所述虚拟地面。In some embodiments, the method further includes: if the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and the virtual screen clips through the virtual ground, hiding the virtual ground.
例如，如图2所示，在2D视频观影模式中，拖拽产生虚拟屏幕121与虚拟地面14穿模时，虚拟地面14消失，变成没有虚拟地面14的场景。其中，在虚拟地面14消失时，可以通过设置消失处理条件保证拖拽的平滑体验，比如消失处理条件可以为虚拟地面14被虚拟屏幕121穿模时，基于拖拽速度控制虚拟地面14逐渐消失，当虚拟屏幕121完全穿过虚拟地面14时，完全隐藏虚拟地面14。For example, as shown in FIG. 2, in the 2D video viewing mode, when dragging causes the virtual screen 121 to clip through the virtual ground 14, the virtual ground 14 disappears and the scene becomes one without the virtual ground 14. When the virtual ground 14 disappears, a smooth drag experience can be ensured by setting a disappearance processing condition; for example, the condition may be that when the virtual ground 14 is clipped by the virtual screen 121, the virtual ground 14 is controlled to fade out gradually based on the drag speed, and when the virtual screen 121 has passed completely through the virtual ground 14, the virtual ground 14 is completely hidden.
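The gradual disappearance of the ground can be sketched as a simple alpha ramp. Note this is an assumption-laden sketch: the text ties the fade to drag speed, while for simplicity the sketch below uses the screen's penetration depth as the fade driver; the linear ramp and all names are illustrative:

```python
def ground_alpha(penetration_depth: float, screen_half_height: float) -> float:
    """Fade the virtual ground while the dragged screen passes through it.

    penetration_depth is how far the screen's lower edge has sunk below
    the ground plane; once the whole screen has passed through
    (depth >= full screen height) the ground is fully hidden (alpha 0).
    """
    full = 2 * screen_half_height
    if penetration_depth <= 0:
        return 1.0  # no clipping: ground fully visible
    if penetration_depth >= full:
        return 0.0  # screen fully through: ground hidden
    return 1.0 - penetration_depth / full  # gradual fade while crossing
```

A speed-based variant would scale the per-frame alpha decrement by the current drag velocity instead of the depth ratio.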
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在拖拽过程中,控制所述虚拟屏幕的边框高亮显示;During the dragging process, control the highlighted display of the border of the virtual screen;
在拖拽结束时,控制所述虚拟屏幕的边框恢复常态显示。When the dragging is completed, the frame of the virtual screen is controlled to return to normal display.
例如，在2D视频观影模式中，在拖拽过程中，如图7所示，虚拟屏幕121的边框1211高亮显示。例如，拖拽结束时，虚拟屏幕的边框恢复normal态。例如，在拖拽过程中，与虚拟现实设备相连接的手柄会发生震动；拖拽结束时，手柄会停止震动。For example, in the 2D video viewing mode, during dragging, as shown in FIG. 7, the border 1211 of the virtual screen 121 is highlighted. For example, when the drag ends, the border of the virtual screen returns to the normal state. For example, during dragging, the handle connected to the virtual reality device vibrates; when the drag ends, the handle stops vibrating.
例如，可以通过长按"Home"键生成重置视野指令，以响应于重置视野指令进行重置视野，将虚拟屏幕的位置重置（reset）至视野中的默认位置。For example, a reset field of view instruction can be generated by long-pressing the "Home" key; in response to the instruction, the field of view is reset and the position of the virtual screen is reset to the default position in the field of view.
如图8所示，播控Bar（栏）15、设置面板16显示时，如果射线光标的光标焦点在虚拟屏幕121内、且在设置面板16所示区域外触发拖拽，该虚拟屏幕121依旧响应拖拽操作；拖拽时，播控Bar（栏）15与设置面板16可以暂时隐藏。As shown in FIG. 8, when the playback control bar 15 and the setting panel 16 are displayed, if the cursor focus of the ray cursor is within the virtual screen 121 and a drag is triggered outside the area of the setting panel 16, the virtual screen 121 still responds to the drag operation; during dragging, the playback control bar 15 and the setting panel 16 can be temporarily hidden.
在一些实施例中，若所述虚拟环境的观影模式为180度或360度的全景视频观影模式，则所述视频观影区为预设比例的取景框，在响应于拖拽控制信息之前，所述取景框处于隐藏状态；所述响应于拖拽控制信息，以所述射线光标的初始位置为球面空间的原点，并固定所述第一夹角，沿所述球面空间的球面拖拽所述视频观影区，包括：In some embodiments, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area is a viewfinder with a preset aspect ratio, and the viewfinder is in a hidden state before the drag control information is responded to; the responding to the drag control information, taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the video viewing area along the spherical surface of the spherical space includes:
响应于拖拽控制信息,控制所述取景框显示于所述虚拟环境中;In response to the drag control information, control the viewfinder frame to be displayed in the virtual environment;
以所述射线光标的初始位置为球面空间的原点,并固定所述第一夹角,在x轴方向上沿所述球面空间的球面拖拽所述取景框。Taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the viewfinder frame along the spherical surface of the spherical space in the x-axis direction.
例如,若虚拟环境的观影模式为180度或360度的全景视频观影模式,则所述视频观影区为预设比例的取景框,在响应于拖拽控制信息之前,所述取景框处于隐藏状态。例如,预设比例的取景框可以为16:9的取景框。例如,该取景框可以为1280*720的区域。For example, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area is a viewing frame with a preset ratio. Before responding to the drag control information, the viewing frame is hidden. For example, the viewfinder frame with a preset ratio may be a 16:9 viewfinder frame. For example, the viewfinder frame may be an area of 1280*720.
例如，在180度或360度的全景视频观影模式中，拖拽触发条件包括射线光标指向视频观影区（即取景框），且射线光标的光标焦点在视频观影区（即取景框）内，该视频观影区（即取景框）可以理解为视频热区范围（该视频热区范围也可以理解为沉浸体验中的视觉聚焦区域），并且对象通过长按交互设备（比如手柄）中的预设按键产生拖拽控制信息，以及虚拟现实设备获取到该拖拽控制信息时，可以响应于拖拽控制信息控制射线光标在x/y轴方向的位移大于或等于xdp，以拖拽视频观影区（即取景框）。For example, in the 180-degree or 360-degree panoramic video viewing mode, the drag trigger condition includes that the ray cursor points at the video viewing area (i.e., the viewfinder) and the cursor focus of the ray cursor lies within it; the video viewing area (i.e., the viewfinder) can be understood as the video hot zone (which can also be understood as the visual focus area in the immersive experience). The subject generates drag control information by long-pressing a preset button on an interactive device (such as a handle), and when the virtual reality device obtains this drag control information, it can, in response, control the ray cursor to be displaced by at least xdp in the x/y-axis direction so as to drag the video viewing area (i.e., the viewfinder).
例如,预设按键可以包括Grip(抓握)键、Trigger(扳机)键、A/X键(功能同Trigger)等。For example, the preset keys may include a Grip key, a Trigger key, an A/X key (the same function as Trigger), etc.
例如，在响应于拖拽控制信息之前，除了上述拖拽触发条件，还可以进行设置交互规避条件。例如，交互规避条件包括以下任一种：“设置面板”弹出时产生拖拽；“设置面板”和“沉浸bar”消失时产生拖拽；“沉浸bar”显示时产生拖拽；“沉浸bar”消失时产生拖拽。其中，以上四者选其一进行交互规避。For example, before responding to the drag control information, in addition to the above drag trigger condition, an interaction avoidance condition can also be set. For example, the interaction avoidance condition includes any one of the following: a drag occurs when the "settings panel" pops up; a drag occurs when the "settings panel" and the "immersion bar" disappear; a drag occurs when the "immersion bar" is displayed; a drag occurs when the "immersion bar" disappears. One of the above four is selected for interaction avoidance.
例如，可参阅图3，在180度或360度的全景视频观影模式中，图3中的视频观影区12可以为取景框122，当响应于拖拽控制信息时，拖拽对象为视频观影区12（即取景框122），跟随被拖拽的视频观影区12（即取景框122），在视频观影区12（即取景框122）中显示的视频画面为全景视频中当前拖拽位置对应的视频贴图。在拖拽过程中，固定射线光标11和视频观影区12（即取景框122）的第一夹角α，通过移动射线光标11对视频观影区12（即取景框122）进行拖拽。For example, referring to FIG. 3, in the 180-degree or 360-degree panoramic video viewing mode, the video viewing area 12 in FIG. 3 may be a viewfinder 122. When responding to the drag control information, the dragged object is the video viewing area 12 (i.e., the viewfinder 122); following the dragged video viewing area 12 (i.e., the viewfinder 122), the video picture displayed in it is the video texture corresponding to the current drag position in the panoramic video. During dragging, the first angle α between the ray cursor 11 and the video viewing area 12 (i.e., the viewfinder 122) is fixed, and the video viewing area 12 (i.e., the viewfinder 122) is dragged by moving the ray cursor 11.
如图3所示，在180度或360度的全景视频观影模式中，在x轴水平方向上，以射线光标11的起始位置111为球面空间20的原点A，射线光标11为球面空间20的半径，固定射线光标11与视频观影区12（即取景框122）的第一夹角α，沿球面空间20的球面进行水平移动拖拽视频观影区12（即取景框122）。As shown in FIG. 3, in the 180-degree or 360-degree panoramic video viewing mode, in the horizontal x-axis direction, the starting position 111 of the ray cursor 11 is taken as the origin A of the spherical space 20, the ray cursor 11 is the radius of the spherical space 20, the first angle α between the ray cursor 11 and the video viewing area 12 (i.e., the viewfinder 122) is fixed, and the video viewing area 12 (i.e., the viewfinder 122) is dragged by moving it horizontally along the spherical surface of the spherical space 20.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在拖拽过程中,控制所述虚拟环境中除所述取景框之外的区域显示蒙层;During the dragging process, control the area in the virtual environment except the viewfinder to display the mask layer;
在拖拽结束时,隐藏所述取景框与所述蒙层。When the dragging is completed, the viewfinder frame and the mask layer are hidden.
例如,如图9或图10所示,在180度或360度的全景视频观影模式中,在拖拽过程中,控制虚拟环境10中除取景框122之外的区域显示蒙层17,例如,该蒙层17可以为黑色蒙层,也可以为其他颜色蒙层。例如,在拖拽过程中仅取景框122内能看到对应位置的视频贴图,取景框122外的蒙层17均蒙黑显示。在拖拽结束时,隐藏取景框122与蒙层17,在沉浸体验中,整个虚拟环境10的沉浸视频画面均可见。例如,在拖拽过程中,与虚拟现实设备相连接的手柄会发生震动;拖拽结束时,手柄会停止震动。For example, as shown in Figure 9 or Figure 10, in the 180-degree or 360-degree panoramic video viewing mode, during the dragging process, the area in the virtual environment 10 except the viewfinder 122 is controlled to display the mask layer 17, for example , the mask layer 17 can be a black mask layer, or it can be a mask layer of other colors. For example, during the dragging process, only the video map at the corresponding position can be seen within the viewfinder frame 122, and the mask layer 17 outside the viewfinder frame 122 is all displayed in black. At the end of dragging, the viewfinder frame 122 and the mask layer 17 are hidden, and during the immersive experience, the entire immersive video image of the virtual environment 10 is visible. For example, during the dragging process, the handle connected to the virtual reality device will vibrate; when the drag ends, the handle will stop vibrating.
例如，可以通过长按"Home"键生成重置视野指令，以响应于重置视野指令进行重置视野，将取景框122的位置重置（reset）至视野中的默认位置。For example, a reset field of view instruction can be generated by long-pressing the "Home" key; in response to the instruction, the field of view is reset and the position of the viewfinder 122 is reset to the default position in the field of view.
例如，如图11所示，播控Bar（栏）15、设置面板16显示时，如果射线光标11的光标焦点在取景框122内、且在设置面板16所示区域外触发拖拽，该取景框122依旧响应拖拽操作；拖拽时，播控Bar（栏）15与设置面板16可以暂时隐藏。For example, as shown in FIG. 11, when the playback control bar 15 and the setting panel 16 are displayed, if the cursor focus of the ray cursor 11 is within the viewfinder 122 and a drag is triggered outside the area of the setting panel 16, the viewfinder 122 still responds to the drag operation; during dragging, the playback control bar 15 and the setting panel 16 can be temporarily hidden.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
在所述取景框的左边界超出所述180度的全景视频的左边界的距离值达到预设距离值时停止拖拽，并响应于重置视野控制指令，控制所述180度的全景视频的左边界移动至所述取景框的左边界；或者 Stopping the drag when the distance by which the left boundary of the viewfinder exceeds the left boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset field of view control instruction, controlling the left boundary of the 180-degree panoramic video to move to the left boundary of the viewfinder; or
在所述取景框的右边界超出所述180度的全景视频的右边界的距离值达到预设距离值时停止拖拽，并响应于重置视野控制指令，控制所述180度的全景视频的右边界移动至所述取景框的右边界。Stopping the drag when the distance by which the right boundary of the viewfinder exceeds the right boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset field of view control instruction, controlling the right boundary of the 180-degree panoramic video to move to the right boundary of the viewfinder.
在一些实施例中,在所述响应于重置视野控制指令之前,还包括:In some embodiments, before responding to the reset visual field control instruction, the method further includes:
在所述取景框中显示重置视野提示信息,所述重置视野提示信息用于提示对象输入所述重置视野控制指令。Reset visual field prompt information is displayed in the viewfinder, and the reset visual field prompt information is used to prompt the subject to input the reset visual field control instruction.
例如，如图12所示，若取景框122的右边界被拖拽超出全景视频的右边界xdp（预设距离值）后，不能继续拖拽，此时取景框122中显示的视频画面是不完整的，可以在取景框中以toast消息提示框1221的方式显示重置视野提示信息，比如该重置视野提示信息的内容为“长按一侧手柄上的Home键重置视野”或者“长按一侧手柄上的○键重置视野”。对象通过该重置视野提示信息的内容长按Home键或○键触发重置视野控制指令，使得虚拟现实设备响应于重置视野控制指令进行重置视野操作，比如控制180度的全景视频的左边界移动至取景框的左边界，或者控制180度的全景视频的右边界移动至取景框的右边界。For example, as shown in FIG. 12, once the right boundary of the viewfinder 122 has been dragged beyond the right boundary of the panoramic video by xdp (the preset distance value), dragging cannot continue; at this point the video picture displayed in the viewfinder 122 is incomplete, and reset field of view prompt information can be displayed in the viewfinder in the form of a toast message box 1221, for example with the content "Long-press the Home key on one handle to reset the field of view" or "Long-press the ○ key on one handle to reset the field of view". Following this prompt, the subject long-presses the Home key or the ○ key to trigger a reset field of view control instruction, so that the virtual reality device performs a field of view reset operation in response, for example controlling the left boundary of the 180-degree panoramic video to move to the left boundary of the viewfinder, or controlling the right boundary of the 180-degree panoramic video to move to the right boundary of the viewfinder.
例如,toast消息提示框1221用于在取景框122中显示一个重置视野提示信息,该toast消息提示框1221没有任何控制按钮,并且不会获得焦点,经过一段时间后自动消失。For example, the toast message prompt box 1221 is used to display a visual field reset prompt message in the viewfinder 122. The toast message prompt box 1221 does not have any control buttons and will not gain focus and will automatically disappear after a period of time.
例如,重置视野时看到的画面是视频边界回弹至取景框边界的画面,即视频的左边界回弹至取景框的左边界,或者视频的右边界回弹至取景框的右边界。若取景框的边界没有拖拽超出视频的边界,则不会出现toast消息提示框。For example, when resetting the field of view, the video boundary bounces back to the viewfinder boundary, that is, the left boundary of the video bounces back to the left boundary of the viewfinder, or the right boundary of the video bounces back to the right boundary of the viewfinder. If the boundary of the viewfinder is not dragged beyond the boundary of the video, the toast message prompt box will not appear.
例如,该预设距离值为xdp,xdp中的x是个动态值,可以根据用户拖拽的力度来设置回弹的大小,不同场景下可以赋不同的值。For example, the preset distance value is xdp, and x in xdp is a dynamic value. The size of the rebound can be set according to the user's drag strength, and different values can be assigned in different scenarios.
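The over-drag limit and rebound behaviour described above can be sketched as follows; the function name, the sign convention of the rebound offset, and the use of 1D boundary coordinates are all illustrative assumptions, not the patented implementation:

```python
def clamp_overdrag(frame_left, frame_right, video_left, video_right, max_overdrag):
    """Limit how far the viewfinder may be dragged past the edges of a
    180-degree video, and compute the rebound applied on reset.

    Dragging past an edge stops once that edge of the frame exceeds the
    corresponding video edge by max_overdrag (the 'xdp' value);
    resetting the view moves the video edge back to the frame edge.
    Returns (can_drag_further_left, can_drag_further_right, rebound),
    where rebound is the signed shift realigning video and frame edges
    (negative: shift left; the sign convention is illustrative).
    """
    overdrag_left = video_left - frame_left    # >0: frame past left video edge
    overdrag_right = frame_right - video_right # >0: frame past right video edge
    can_drag_left = overdrag_left < max_overdrag
    can_drag_right = overdrag_right < max_overdrag
    rebound = 0.0
    if overdrag_left > 0:
        rebound = -overdrag_left
    elif overdrag_right > 0:
        rebound = overdrag_right
    return can_drag_left, can_drag_right, rebound
```

If neither frame edge is past the video boundary, the rebound is zero and no toast prompt would be shown, matching the behaviour described in the text.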
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
若所述拖拽控制信息为基于对象操控交互设备的按键生成的拖拽控制信息，则在响应于拖拽控制信息时，向所述交互设备发送震动提示信息，所述震动提示信息用于指示所述交互设备发生震动，以提示拖拽操作被触发。If the drag control information is generated based on the subject operating a key of an interactive device, then when responding to the drag control information, vibration prompt information is sent to the interactive device; the vibration prompt information is used to instruct the interactive device to vibrate so as to prompt that the drag operation has been triggered.
例如，交互设备以手柄为例，拖拽操作被触发时，向手柄发送震动提示信息，震动提示信息用于指示手柄发生震动，手柄响应于震动提示信息产生瞬时震动，以向对象提示拖拽操作被触发。例如，震动持续时间为x秒，比如3秒。For example, taking a handle as the interactive device, when the drag operation is triggered, vibration prompt information is sent to the handle; the vibration prompt information instructs the handle to vibrate, and the handle produces a momentary vibration in response so as to prompt the subject that the drag operation has been triggered. For example, the vibration lasts x seconds, such as 3 seconds.
在一些实施例中,所述方法还包括:In some embodiments, the method further includes:
若所述拖拽控制信息为基于对象的裸手手势生成的拖拽控制信息,则在响应于拖拽控制信息时,发出语音提示信息,所述语音提示信息用于提示拖拽操作被触发。If the drag control information is drag control information generated based on the object's bare hand gesture, then in response to the drag control information, voice prompt information is issued, and the voice prompt information is used to prompt that the drag operation is triggered.
例如,拖拽操作被触发时,可以发出语音提示信息,语音提示信息用于提示拖拽操作被触发。For example, when a drag-and-drop operation is triggered, a voice prompt message can be issued, and the voice prompt message is used to prompt that the drag-and-drop operation is triggered.
在一些实施例中,所述方法还包括:在拖拽过程中,隐藏所述射线光标,并显示位于所述视频观影区上的所述射线光标的光标焦点。In some embodiments, the method further includes: during the dragging process, hiding the ray cursor and displaying the cursor focus of the ray cursor located on the video viewing area.
例如，在2D视频观影模式中的拖拽过程中：可以隐藏射线光标11，只显示光标焦点112。还可以隐藏播放bar（栏）15；还可以以全屏视频的画面显示非normal（标准）态；虚拟屏幕121中显示的视频播放/暂停状态不变。例如，在2D视频观影模式中的拖拽过程中，显示虚拟环境如图7所示，视频观影区12（即虚拟屏幕121）内显示的视频画面为2D全屏视频。For example, during dragging in the 2D video viewing mode: the ray cursor 11 can be hidden and only the cursor focus 112 displayed. The playback bar 15 can also be hidden; the full-screen video picture can also be displayed in a non-normal (non-standard) state; and the play/pause state of the video displayed on the virtual screen 121 remains unchanged. For example, during dragging in the 2D video viewing mode, the displayed virtual environment is as shown in FIG. 7, and the video picture displayed in the video viewing area 12 (i.e., the virtual screen 121) is a 2D full-screen video.
例如,在180度或360度的全景视频观影模式中的拖拽过程中:可以隐藏射线光标11,只显示光标焦点112。取景框122中显示的视频播放/暂停状态不变。例如,在180度或360度的全景视频观影模式中的拖拽过程中,显示虚拟环境如图9所示,视频观影区12(即取景框122)内显示的视频画面为拖拽位置对应的视频贴图。For example, during the dragging process in the 180-degree or 360-degree panoramic video viewing mode: the ray cursor 11 can be hidden, and only the cursor focus 112 is displayed. The video play/pause state displayed in the viewfinder 122 remains unchanged. For example, during the dragging process in the 180-degree or 360-degree panoramic video viewing mode, the virtual environment is displayed as shown in FIG. 9, and the video screen displayed in the video viewing area 12 (i.e., the viewfinder 122) is the video map corresponding to the dragging position.
步骤130,基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面。Step 130: Determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
在一些实施例中,若所述虚拟环境的观影模式为二维视频观影模式,所述基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面,包括:In some embodiments, if the viewing mode of the virtual environment is a two-dimensional video viewing mode, the video viewing area is determined based on the current drag position of the video viewing area in the virtual environment. The video screen displayed in the area includes:
基于所述虚拟屏幕位于所述虚拟环境中的当前拖拽位置，确定所述视频观影区显示的视频画面为全屏播放的二维视频，其中，所述虚拟屏幕在所述当前拖拽位置显示的视频画面与所述虚拟屏幕在拖拽起始位置显示的视频画面均为全屏播放的二维视频。Based on the current drag position of the virtual screen in the virtual environment, it is determined that the video picture displayed in the video viewing area is a two-dimensional video played full-screen, wherein the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag starting position are both two-dimensional videos played full-screen.
例如，在2D视频观影模式中，对象将长按的预设按键松手时，产生拖拽结束指令，以使得虚拟现实设备响应于拖拽结束指令，取消拖拽，此时虚拟屏幕停留在当前拖拽位置。如图13所示，拖拽结束后，虚拟屏幕的整个全屏模式的“UI显示”作为整体，保持“全屏bar（栏）”、“全屏视频”、“设置面板”的相对位置不变。该全屏视频为全屏播放的二维视频。For example, in the 2D video viewing mode, when the subject releases the long-pressed preset button, a drag end instruction is generated, so that the virtual reality device cancels the drag in response to it; the virtual screen then stays at the current drag position. As shown in FIG. 13, after the drag ends, the entire full-screen-mode "UI display" of the virtual screen is kept as a whole, with the relative positions of the "full-screen bar", "full-screen video", and "settings panel" unchanged. The full-screen video is a two-dimensional video played full-screen.
在拖拽前、拖拽过程中及拖拽后，二维视频观影模式在视频观影区中显示的视频画面均为全屏视频，在拖拽过程中视频的播放状态或暂停状态不变。Before, during, and after the drag, the video pictures displayed in the video viewing area in the 2D video viewing mode are all full-screen videos, and the play or pause state of the video remains unchanged during the drag.
例如，在2D视频观影模式中，拖拽结束并退出虚拟环境后，不记录当次拖拽位置。下次进入虚拟环境时，显示屏显示的是默认状态。例如，默认状态表示显示系统的初始状态。比如当用户佩戴头戴式的虚拟现实设备躺着看电影时，会把显示的视频画面调成倾斜的45°画面（拖拽后的视频画面）；但是下次佩戴虚拟现实设备时，显示的视频画面为正常的90°画面（默认状态）。For example, in the 2D video viewing mode, after the drag ends and the virtual environment is exited, the drag position is not recorded; the next time the virtual environment is entered, the display shows the default state. For example, the default state represents the initial state of the display system. For instance, when a user wearing a head-mounted virtual reality device lies down to watch a movie, the displayed video picture may be adjusted to a tilted 45° picture (the picture after dragging); but the next time the virtual reality device is worn, the displayed video picture is the normal 90° picture (the default state).
例如，可以不记录虚拟屏幕的位置拖拽信息，但是可以记录射线光标的缩放信息。该缩放信息包括射线光标的缩放尺寸，可以通过射线光标的缩放尺寸并基于屏幕中心点将虚拟屏幕拉近或拉远。For example, the drag position information of the virtual screen may not be recorded, but the zoom information of the ray cursor may be recorded. The zoom information includes the zoom size of the ray cursor, by which the virtual screen can be moved closer or farther based on the screen center point.
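The persistence policy just described (discard the drag position, keep only the ray cursor's zoom) can be sketched as follows; the JSON file format, the `ray_scale` key, and the function names are assumptions for illustration only:

```python
import json

def save_session_state(path: str, ray_scale: float) -> None:
    """Persist only the ray cursor's zoom (scale) across sessions; the
    dragged screen position is deliberately not recorded, so the next
    session starts from the default layout."""
    with open(path, "w") as f:
        json.dump({"ray_scale": ray_scale}, f)

def load_session_state(path: str, default_scale: float = 1.0) -> float:
    """Restore the saved zoom, falling back to the default state when
    no previous session was recorded."""
    try:
        with open(path) as f:
            return json.load(f).get("ray_scale", default_scale)
    except FileNotFoundError:
        return default_scale
```

On the next entry into the virtual environment, the screen position would be reset to the default while the restored scale is applied about the screen center point.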
在一些实施例中,若所述虚拟环境的观影模式为180度或360度的全景视频观影模式,所述基于所述视频观影区位于所述虚拟环境中的当前拖拽位置,确定所述视频观影区显示的视频画面,包括:In some embodiments, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the determination is based on the current drag position of the video viewing area in the virtual environment. The video images displayed in the video viewing area include:
基于所述取景框位于所述虚拟环境中的当前拖拽位置，确定所述取景框显示的视频画面为所述全景视频中所述当前拖拽位置对应的视频贴图，其中，所述取景框在所述当前拖拽位置显示的视频画面与所述取景框在拖拽起始位置显示的视频画面不同。Based on the current drag position of the viewfinder in the virtual environment, it is determined that the video picture displayed by the viewfinder is the video texture corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder at the current drag position is different from the video picture displayed by the viewfinder at the drag starting position.
例如，在180度或360度的全景视频观影模式中，对象将长按的预设按键松手时，产生拖拽结束指令，以使得虚拟现实设备响应于拖拽结束指令，取消拖拽，此时取景框停留在当前拖拽位置，且取景框以Alpha变化的方式消失，取景框显示的视频画面留在当前拖拽位置进行播放，取景框显示的视频画面为全景视频中当前拖拽位置对应的视频贴图，其中，取景框在当前拖拽位置显示的视频画面与取景框在拖拽起始位置显示的视频画面不同。For example, in the 180-degree or 360-degree panoramic video viewing mode, when the subject releases the long-pressed preset button, a drag end instruction is generated, so that the virtual reality device cancels the drag in response to it. The viewfinder then stays at the current drag position and disappears by way of an alpha transition, while the video picture it displayed remains playing at the current drag position; that picture is the video texture corresponding to the current drag position in the panoramic video, and it differs from the video picture displayed by the viewfinder at the drag starting position.
例如,VR180°的沉浸观影模式下,是个180°的全景图,相当于一个半球,在对象戴上虚拟现实设备后可以以180°范围观看全景视频。当取景框显示的视频画面内容根据取景框的移动被拖拽出180°可呈现的范围之外时,会存在一个黑色蒙板或Alpha渐变,以跟观影画面进行融入,比如场景是纯黑的,观影画面是彩色的,将视频画面内容拖出180°之外时,会存在一个从彩色观影画面过渡到纯黑场景的一个过渡状态,这个过渡状态即为Alpha变化。For example, in the VR180° immersive viewing mode, it is a 180° panorama, which is equivalent to a hemisphere. After the subject wears the virtual reality device, he or she can watch the panoramic video in a 180° range. When the video content displayed in the viewfinder is dragged out of the 180° displayable range according to the movement of the viewfinder, there will be a black mask or Alpha gradient to blend with the viewing screen. For example, if the scene is pure black Yes, the viewing screen is in color. When the video content is dragged out 180°, there will be a transition state from the color viewing screen to a pure black scene. This transition state is the Alpha change.
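The alpha change described above can be sketched as a simple opacity ramp: fully opaque inside the 180-degree presentable range, fading to the pure black scene once the content is dragged past the boundary. The fade width and the linear ramp are illustrative assumptions; the disclosure only states that a gradual transition exists.

```python
# Hypothetical sketch of the alpha transition: picture opacity as a function
# of horizontal angle. The fade width (fade_deg) and the linear ramp are
# assumptions, not values taken from the disclosure.

def transition_alpha(yaw_deg, half_range_deg=90.0, fade_deg=15.0):
    """Return picture opacity in [0, 1] for a horizontal angle yaw_deg.

    Fully opaque inside +/-half_range_deg (the 180-degree presentable
    range), fading linearly to 0 (pure black scene) over fade_deg
    beyond the boundary.
    """
    overshoot = abs(yaw_deg) - half_range_deg
    if overshoot <= 0:
        return 1.0
    if overshoot >= fade_deg:
        return 0.0
    return 1.0 - overshoot / fade_deg
```

At the center of the hemisphere the picture is fully opaque; 5 degrees past the boundary it is two-thirds opaque under these assumed parameters; far outside, only the black scene remains.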
For example, in the 180-degree or 360-degree panoramic video viewing mode, after the drag ends and the user exits the virtual environment, the drag position of that session is not recorded. The next time the user enters the virtual environment, the display shows the default state.
All of the above technical solutions may be combined arbitrarily to form optional embodiments of the present disclosure, and details are not repeated one by one here.
In the embodiments of the present disclosure, a virtual environment is displayed, in which a ray cursor and a video viewing area are presented, the ray cursor pointing toward the video viewing area and forming a first included angle with the video viewing area; in response to drag control information, the video viewing area is dragged based on the initial position of the ray cursor and the first included angle; and the video picture displayed in the video viewing area is determined based on the current drag position of the video viewing area in the virtual environment. The embodiments of the present disclosure design three-dimensional drag modes for 2D video, VR180 video, and VR360 video, letting users experience the appeal of VR space and different viewing angles in the video field, and improving the immersive experience of watching videos in virtual reality space.
To facilitate better implementation of the method for adjusting a viewing picture in a virtual environment of the embodiments of the present disclosure, an embodiment of the present disclosure further provides an apparatus for adjusting a viewing picture in a virtual environment. Referring to FIG. 14, FIG. 14 is a schematic structural diagram of the apparatus for adjusting a viewing picture in a virtual environment provided by an embodiment of the present disclosure. The apparatus 200 for adjusting a viewing picture in a virtual environment may include:
a display unit 210, configured to display a virtual environment, in which a ray cursor and a video viewing area are presented, the ray cursor pointing toward the video viewing area and forming a first included angle with the video viewing area;
a control unit 220, configured to drag the video viewing area in response to drag control information and based on the initial position of the ray cursor and the first included angle; and
a determining unit 230, configured to determine the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
In some embodiments, the control unit 220 is specifically configured to:
in response to the drag control information, take the initial position of the ray cursor as the origin of a spherical space, fix the first included angle, and drag the video viewing area along the spherical surface of the spherical space.
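The spherical drag above can be sketched with standard spherical coordinates: the ray cursor's initial position is the origin, the radius (and therefore the first included angle between the ray and the viewing area) stays fixed, and only the yaw and pitch of the viewing area change during the drag. The parameter names and the axis convention are illustrative assumptions.

```python
import math

# Hypothetical sketch of dragging the viewing area along a spherical surface
# centered at the ray cursor's initial position. The radius is fixed during
# the drag; only yaw (x-axis direction) and pitch (y-axis direction) change.
# Names and axis conventions are assumptions for illustration.

def drag_on_sphere(radius, yaw_deg, pitch_deg):
    """Return (x, y, z) of the viewing-area center on a sphere of fixed
    radius centered at the ray cursor's initial position (the origin)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

Because the radius never changes, every drag position lies on the same sphere, which is what keeps the first included angle constant throughout the drag.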
In some embodiments, if the viewing mode of the virtual environment is a two-dimensional video viewing mode, the video viewing area is a virtual screen, and the virtual screen is in a displayed state before the drag control information is responded to;
when dragging the video viewing area along the spherical surface of the spherical space, the control unit 220 is specifically configured to: drag the virtual screen along the spherical surface of the spherical space in at least one of the x-axis direction and the y-axis direction.
In some embodiments, the control unit 220 is further configured to: if the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom of the y-axis direction, control the virtual screen to flip 180 degrees around the center of the virtual screen.
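The pole flip above can be sketched as a rule on the drag pitch: once the screen reaches the top or bottom of the y-axis direction, a 180-degree rotation about its own center is applied so the picture stays upright for the viewer. The pole threshold and the roll-angle representation are illustrative assumptions.

```python
# Hypothetical sketch of the 180-degree flip at the top or bottom of the
# y-axis direction. The pole threshold (pole_deg) and the representation of
# the flip as a roll angle are assumptions, not details from the disclosure.

def screen_roll_deg(pitch_deg, pole_deg=90.0):
    """Return the roll applied to the virtual screen about its own center:
    180 degrees when the drag reaches a pole, 0 otherwise."""
    return 180.0 if abs(pitch_deg) >= pole_deg else 0.0
```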
In some embodiments, the control unit 220 is further configured to: if the virtual screen is dragged along the spherical surface of the spherical space to the virtual ground in the virtual environment, and the virtual screen clips through the virtual ground, hide the virtual ground.
In some embodiments, the control unit 220 is further configured to:
during the drag, control the border of the virtual screen to be highlighted; and
when the drag ends, control the border of the virtual screen to return to its normal display.
In some embodiments, the determining unit 230 is specifically configured to:
determine, based on the current drag position of the virtual screen in the virtual environment, that the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag start position are both two-dimensional videos played in full screen.
In some embodiments, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area is a viewfinder frame with a preset aspect ratio, and the viewfinder frame is in a hidden state before the drag control information is responded to;
the control unit 220 is configured to:
in response to the drag control information, control the viewfinder frame to be displayed in the virtual environment; and
take the initial position of the ray cursor as the origin of the spherical space, fix the first included angle, and drag the viewfinder frame along the spherical surface of the spherical space in the x-axis direction.
In some embodiments, the control unit 220 is further configured to:
during the drag, control the area of the virtual environment other than the viewfinder frame to display a mask layer; and
when the drag ends, hide the viewfinder frame and the mask layer.
In some embodiments, the control unit 220 is further configured to:
stop the drag when the distance by which the left boundary of the viewfinder frame exceeds the left boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset-view control instruction, control the left boundary of the 180-degree panoramic video to move to the left boundary of the viewfinder frame; or
stop the drag when the distance by which the right boundary of the viewfinder frame exceeds the right boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to the reset-view control instruction, control the right boundary of the 180-degree panoramic video to move to the right boundary of the viewfinder frame.
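The boundary rule above can be sketched for the left-boundary case as a clamp on the drag plus a reset offset: the drag stops once the viewfinder's left edge exceeds the video's left edge by the preset distance, and the reset-view instruction moves the video edge back to the viewfinder edge. Treating the edges as horizontal angular positions in degrees, the preset value, and the function names are all illustrative assumptions; the right-boundary case is symmetric.

```python
# Hypothetical sketch of the boundary stop and reset-view behavior for the
# left boundary. Edges are modeled as horizontal positions in degrees
# (smaller = farther left); the preset distance is an assumed value.

def clamp_drag(frame_left, video_left, preset=10.0):
    """Stop the drag so the viewfinder's left edge never exceeds the video's
    left edge by more than `preset`; return the allowed frame_left."""
    return max(frame_left, video_left - preset)

def reset_view(frame_left, video_left):
    """On a reset-view control instruction, return the offset that moves the
    video's left boundary to the viewfinder's left boundary."""
    return frame_left - video_left
```

For example, with the video's left edge at -90 degrees and a 10-degree preset, a drag toward -130 degrees halts at -100, and the reset instruction then shifts the video by -10 degrees so the two left edges coincide.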
In some embodiments, before responding to the reset-view control instruction, the control unit 220 is further configured to:
display reset-view prompt information in the viewfinder frame, the reset-view prompt information being used to prompt the user to input the reset-view control instruction.
In some embodiments, the determining unit 230 is specifically configured to:
determine, based on the current drag position of the viewfinder frame in the virtual environment, that the video picture displayed by the viewfinder frame is the video texture corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed by the viewfinder frame at the drag start position.
In some embodiments, the control unit 220 is further configured to: during the drag, hide the ray cursor and display the cursor focus of the ray cursor on the video viewing area.
In some embodiments, the control unit 220 is further configured to:
if the drag control information is drag control information generated based on the user operating a button of an interactive device, send vibration prompt information to the interactive device when responding to the drag control information, the vibration prompt information being used to instruct the interactive device to vibrate so as to prompt that the drag operation has been triggered.
In some embodiments, the control unit 220 is further configured to: if the drag control information is drag control information generated based on a bare-hand gesture of the user, issue voice prompt information when responding to the drag control information, the voice prompt information being used to prompt that the drag operation has been triggered.
Each unit in the above apparatus 200 for adjusting a viewing picture in a virtual environment may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above units may be embedded in, or independent of, the processor of the virtual reality device in the form of hardware, or may be stored in the memory of the virtual reality device in the form of software, so that the processor can invoke and execute the operations corresponding to each of the above units.
The apparatus 200 for adjusting a viewing picture in a virtual environment may be integrated in a terminal or server that has a memory, is equipped with a processor, and has computing capability, or the apparatus 200 for adjusting a viewing picture in a virtual environment may itself be the terminal or server.
In some embodiments, the present disclosure further provides a virtual reality device, including a memory and a processor. A computer program is stored in the memory, and the processor, when executing the computer program, implements the steps in the above method embodiments.
As shown in FIG. 15, FIG. 15 is a schematic structural diagram of a virtual reality device provided by an embodiment of the present disclosure. The virtual reality device 300 may typically be provided in the form of glasses, a head-mounted display (HMD), or contact lenses to realize visual perception and other forms of perception. Of course, the form of the virtual reality device is not limited to these and may be further miniaturized or enlarged as needed. The virtual reality device 300 may include, but is not limited to, the following components:
Detection module 301: uses various sensors to detect the user's operation commands and act on the virtual environment, such as continuously updating the image displayed on the display screen to follow the user's line of sight, realizing interaction between the user and the virtual scene, for example continuously updating the displayed content based on the detected rotation direction of the user's head.
Feedback module 302: receives data from the sensors and provides real-time feedback to the user. The feedback module 302 may be used to display a graphical user interface, for example displaying the virtual environment on the graphical user interface. For example, the feedback module 302 may include a display screen and the like.
Sensor 303: on the one hand, accepts operation commands from the user and applies them to the virtual environment; on the other hand, provides the results produced by the operation to the user in the form of various kinds of feedback.
Control module 304: controls the sensors and various input/output devices, including obtaining the user's data (such as actions and voice) and outputting perception data such as images, vibration, temperature, and sound, acting on the user, the virtual environment, and the real world.
Modeling module 305: constructs a three-dimensional model of the virtual environment, which may also include various feedback mechanisms in the three-dimensional model, such as sound and touch.
In the embodiments of the present disclosure, the virtual environment may be constructed by the modeling module 305 and displayed by the feedback module 302, the virtual environment presenting a ray cursor and a video viewing area, wherein the ray cursor points toward the video viewing area and forms a first included angle with the video viewing area; the control module 304 then drags the video viewing area in response to drag control information and based on the initial position of the ray cursor and the first included angle; and the control module 304 then determines the video picture displayed in the video viewing area based on the current drag position of the video viewing area in the virtual environment.
In some embodiments, as shown in FIG. 16, which is another schematic structural diagram of a virtual reality device provided by an embodiment of the present disclosure, the virtual reality device 300 further includes a processor 310 with one or more processing cores, a memory 320 with one or more computer-readable storage media, and a computer program stored on the memory 320 and executable on the processor. The processor 310 is electrically connected to the memory 320. Those skilled in the art will understand that the virtual reality device structure shown in the figures does not constitute a limitation on the virtual reality device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 310 is the control center of the virtual reality device 300. It uses various interfaces and lines to connect the various parts of the entire virtual reality device 300, and performs the various functions of the virtual reality device 300 and processes data by running or loading software programs and/or modules stored in the memory 320 and by invoking data stored in the memory 320, thereby monitoring the virtual reality device 300 as a whole.
In the embodiments of the present disclosure, the processor 310 in the virtual reality device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 320 according to the following steps, and the processor 310 runs the application programs stored in the memory 320, thereby realizing various functions:
displaying a virtual environment, in which a ray cursor and a video viewing area are presented, wherein the ray cursor points toward the video viewing area and forms a first included angle with the video viewing area; in response to drag control information, dragging the video viewing area based on the initial position of the ray cursor and the first included angle; and determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, and details are not repeated here.
In some embodiments, the processor 310 may include the detection module 301, the control module 304, and the modeling module 305.
In some embodiments, as shown in FIG. 16, the virtual reality device 300 further includes a radio frequency circuit 306, an audio circuit 307, and a power supply 308. The processor 310 is electrically connected to the memory 320, the feedback module 302, the sensor 303, the radio frequency circuit 306, the audio circuit 307, and the power supply 308, respectively. Those skilled in the art will understand that the virtual reality device structure shown in FIG. 15 or FIG. 16 does not constitute a limitation on the virtual reality device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The radio frequency circuit 306 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another virtual reality device, and to transmit and receive signals to and from the network device or the other virtual reality device.
The audio circuit 307 may be used to provide an audio interface between the user and the virtual reality device through a speaker and a microphone. The audio circuit 307 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 307 receives and converts into audio data; after the audio data is output to the processor 310 for processing, it is sent via the radio frequency circuit 306 to, for example, another virtual reality device, or the audio data is output to the memory for further processing. The audio circuit 307 may also include an earphone jack to provide communication between peripheral earphones and the virtual reality device.
The power supply 308 is used to supply power to the various components of the virtual reality device 300.
Although not shown in FIG. 15 or FIG. 16, the virtual reality device 300 may further include a camera, a wireless fidelity module, a Bluetooth module, an input module, and the like, which are not described in detail here.
In some embodiments, the present disclosure further provides a computer-readable storage medium for storing a computer program. The computer-readable storage medium may be applied to a virtual reality device or a server, and the computer program causes the virtual reality device or the server to execute the corresponding processes in the method for adjusting a viewing picture in a virtual environment in the embodiments of the present disclosure; for brevity, details are not repeated here.
In some embodiments, the present disclosure further provides a computer program product, including a computer program stored in a computer-readable storage medium. The processor of a virtual reality device reads the computer program from the computer-readable storage medium and executes it, causing the virtual reality device to execute the corresponding processes in the method for adjusting a viewing picture in a virtual environment in the embodiments of the present disclosure; for brevity, details are not repeated here.
The present disclosure further provides a computer program stored in a computer-readable storage medium. The processor of a virtual reality device reads the computer program from the computer-readable storage medium and executes it, causing the virtual reality device to execute the corresponding processes in the method for adjusting a viewing picture in a virtual environment in the embodiments of the present disclosure; for brevity, details are not repeated here.
It should be understood that the processor in the embodiments of the present disclosure may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of the present disclosure may be implemented or executed. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in conjunction with the embodiments of the present disclosure may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be understood that the memory in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a virtual reality device (which may be a personal computer or a server) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present disclosure, and all such changes or substitutions shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (19)

  1. A method for adjusting a viewing picture in a virtual environment, comprising:
    displaying a virtual environment, in which a ray cursor and a video viewing area are presented, wherein the ray cursor points toward the video viewing area and forms a first included angle with the video viewing area;
    in response to drag control information, dragging the video viewing area based on an initial position of the ray cursor and the first included angle; and
    determining, based on a current drag position of the video viewing area in the virtual environment, a video picture displayed in the video viewing area.
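As a non-normative illustration of the "first included angle" recited in claim 1, the sketch below computes the angle between the ray cursor's direction and the viewing area's surface normal. The function name and the use of a surface normal are assumptions of this sketch, not part of the claim.

```python
import math

def included_angle_deg(ray_dir, area_normal):
    """Angle (degrees) between the ray cursor's direction and the viewing
    area's surface normal -- a stand-in for the claimed 'first included angle'."""
    dot = sum(a * b for a, b in zip(ray_dir, area_normal))
    norm_ray = math.sqrt(sum(a * a for a in ray_dir))
    norm_area = math.sqrt(sum(b * b for b in area_normal))
    return math.degrees(math.acos(dot / (norm_ray * norm_area)))
```

For a ray that hits the viewing area head-on the angle is 0 degrees; a ray parallel to the area's surface yields 90 degrees.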
  2. The method for adjusting a viewing picture in a virtual environment according to claim 1, wherein the dragging the video viewing area in response to the drag control information and based on the initial position of the ray cursor and the first included angle comprises:
    in response to the drag control information, taking the initial position of the ray cursor as the origin of a spherical space, fixing the first included angle, and dragging the video viewing area along the spherical surface of the spherical space.
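Claim 2 moves the viewing area along a sphere whose origin is the ray cursor's initial position while the first included angle stays fixed. A minimal sketch of that motion, assuming the fixed angle corresponds to a constant radius and that drag input arrives as yaw/pitch deltas (all names hypothetical):

```python
import math

def drag_on_sphere(origin, radius, yaw, pitch, d_yaw, d_pitch):
    """Move a point along a sphere centered at `origin`.

    The radius is held constant during the drag, so the angle between
    the ray cursor and the viewing area does not change.
    """
    yaw += d_yaw
    # Clamp pitch at the poles of the spherical space.
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch + d_pitch))
    x = origin[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = origin[1] + radius * math.sin(pitch)
    z = origin[2] + radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z), yaw, pitch
```

Each frame of the drag would feed the accumulated controller deltas into this function and place the viewing area at the returned position, facing the origin.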
  3. The method for adjusting a viewing picture in a virtual environment according to claim 2, wherein, if a viewing mode of the virtual environment is a two-dimensional video viewing mode, the video viewing area is a virtual screen, and the virtual screen is in a displayed state before the drag control information is responded to;
    the dragging the video viewing area along the spherical surface of the spherical space comprises:
    dragging the virtual screen along the spherical surface of the spherical space in at least one of an x-axis direction and a y-axis direction.
  4. The method for adjusting a viewing picture in a virtual environment according to claim 3, further comprising:
    if the virtual screen is dragged along the spherical surface of the spherical space in the y-axis direction to the top or bottom in the y-axis direction, controlling the virtual screen to flip 180 degrees about the center of the virtual screen.
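The 180-degree flip of claim 4 can be sketched as a roll applied to the virtual screen when the drag reaches a pole of the spherical space; treating "top or bottom" as pitch = ±90 degrees is an assumption of this sketch.

```python
def maybe_flip_screen(pitch_deg, roll_deg, eps=1e-6):
    """Flip the virtual screen 180 degrees about its own center when the
    drag reaches the top or bottom of the y-axis direction (hypothetical)."""
    at_pole = abs(abs(pitch_deg) - 90.0) < eps
    if at_pole:
        roll_deg = (roll_deg + 180.0) % 360.0
    return roll_deg
```

The flip keeps the screen's content upright for a user who has dragged it directly overhead or underfoot.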
  5. The method for adjusting a viewing picture in a virtual environment according to claim 3 or 4, further comprising:
    if the virtual screen is dragged along the spherical surface of the spherical space to a virtual ground in the virtual environment, and the virtual screen clips through the virtual ground, hiding the virtual ground.
  6. The method for adjusting a viewing picture in a virtual environment according to any one of claims 3-5, further comprising:
    during the drag, controlling a border of the virtual screen to be displayed highlighted; and
    when the drag ends, controlling the border of the virtual screen to return to its normal display.
  7. The method for adjusting a viewing picture in a virtual environment according to any one of claims 3-6, wherein the determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area comprises:
    determining, based on the current drag position of the virtual screen in the virtual environment, that the video picture displayed in the video viewing area is a two-dimensional video played in full screen, wherein both the video picture displayed by the virtual screen at the current drag position and the video picture displayed by the virtual screen at the drag start position are two-dimensional videos played in full screen.
  8. The method for adjusting a viewing picture in a virtual environment according to claim 2, wherein, if the viewing mode of the virtual environment is a 180-degree or 360-degree panoramic video viewing mode, the video viewing area is a viewfinder frame of a preset ratio, and the viewfinder frame is in a hidden state before the drag control information is responded to;
    the taking, in response to the drag control information, the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the video viewing area along the spherical surface of the spherical space comprises:
    in response to the drag control information, controlling the viewfinder frame to be displayed in the virtual environment; and
    taking the initial position of the ray cursor as the origin of the spherical space, fixing the first included angle, and dragging the viewfinder frame along the spherical surface of the spherical space in the x-axis direction.
  9. The method for adjusting a viewing picture in a virtual environment according to claim 8, further comprising:
    during the drag, controlling an area of the virtual environment other than the viewfinder frame to display a mask layer; and
    when the drag ends, hiding the viewfinder frame and the mask layer.
  10. The method for adjusting a viewing picture in a virtual environment according to claim 8 or 9, further comprising:
    stopping the drag when the distance by which the left boundary of the viewfinder frame exceeds the left boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to a reset-field-of-view control instruction, controlling the left boundary of the 180-degree panoramic video to move to the left boundary of the viewfinder frame; or
    stopping the drag when the distance by which the right boundary of the viewfinder frame exceeds the right boundary of the 180-degree panoramic video reaches a preset distance value, and, in response to the reset-field-of-view control instruction, controlling the right boundary of the 180-degree panoramic video to move to the right boundary of the viewfinder frame.
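The stop-and-reset behavior of claim 10 can be sketched with horizontal angles: the drag stops once the viewfinder overshoots a boundary of the 180-degree video by a preset distance, and a reset instruction then snaps that video edge back to the viewfinder edge. Representing boundaries as degrees and returning a shift value are assumptions of this sketch.

```python
def reset_offset(frame_left, frame_right, video_left, video_right,
                 preset_overshoot, reset_requested):
    """Return how far the panoramic video should be shifted when a
    reset-field-of-view instruction follows an overshoot; 0.0 otherwise."""
    if video_left - frame_left >= preset_overshoot and reset_requested:
        return frame_left - video_left    # move video's left edge onto the frame
    if frame_right - video_right >= preset_overshoot and reset_requested:
        return frame_right - video_right  # move video's right edge onto the frame
    return 0.0
```

A caller would apply the returned offset to the panorama's yaw so that no blank area remains visible inside the viewfinder.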
  11. The method for adjusting a viewing picture in a virtual environment according to claim 10, wherein, before the responding to the reset-field-of-view control instruction, the method further comprises:
    displaying reset-field-of-view prompt information in the viewfinder frame, the reset-field-of-view prompt information being used to prompt a user to input the reset-field-of-view control instruction.
  12. The method for adjusting a viewing picture in a virtual environment according to any one of claims 8-11, wherein the determining, based on the current drag position of the video viewing area in the virtual environment, the video picture displayed in the video viewing area comprises:
    determining, based on the current drag position of the viewfinder frame in the virtual environment, that the video picture displayed by the viewfinder frame is the video texture corresponding to the current drag position in the panoramic video, wherein the video picture displayed by the viewfinder frame at the current drag position is different from the video picture displayed by the viewfinder frame at the drag start position.
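Claim 12 maps the viewfinder's current drag position to the matching region of the panoramic video. Assuming an equirectangular video frame and a viewfinder described by a center yaw and a horizontal field of view (both assumptions of this sketch), the normalized texture range could be computed as:

```python
def viewfinder_u_range(yaw_deg, fov_deg, pano_deg=360.0):
    """Normalized horizontal texture range [u0, u1] of an equirectangular
    panorama shown by a viewfinder centered at `yaw_deg` with a horizontal
    field of view of `fov_deg` degrees."""
    u0 = ((yaw_deg - fov_deg / 2.0) % pano_deg) / pano_deg
    u1 = ((yaw_deg + fov_deg / 2.0) % pano_deg) / pano_deg
    return u0, u1
```

Dragging the viewfinder changes `yaw_deg`, so the sampled texture range — and hence the displayed picture — differs from the one at the drag start position, as the claim requires.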
  13. The method for adjusting a viewing picture in a virtual environment according to any one of claims 1-12, further comprising:
    during the drag, hiding the ray cursor and displaying the cursor focus of the ray cursor located on the video viewing area.
  14. The method for adjusting a viewing picture in a virtual environment according to any one of claims 1-13, further comprising:
    if the drag control information is drag control information generated based on a user operating a key of an interaction device, sending vibration prompt information to the interaction device when responding to the drag control information, the vibration prompt information being used to instruct the interaction device to vibrate so as to prompt that a drag operation has been triggered.
  15. The method for adjusting a viewing picture in a virtual environment according to any one of claims 1-14, further comprising:
    if the drag control information is drag control information generated based on a bare-hand gesture of a user, issuing voice prompt information when responding to the drag control information, the voice prompt information being used to prompt that a drag operation has been triggered.
  16. An apparatus for adjusting a viewing picture in a virtual environment, comprising:
    a display unit configured to display a virtual environment, in which a ray cursor and a video viewing area are presented, wherein the ray cursor points toward the video viewing area and forms a first included angle with the video viewing area;
    a control unit configured to drag the video viewing area in response to drag control information and based on an initial position of the ray cursor and the first included angle; and
    a determining unit configured to determine, based on a current drag position of the video viewing area in the virtual environment, a video picture displayed in the video viewing area.
  17. A computer-readable storage medium storing a computer program, wherein the computer program is adapted to be loaded by a processor to execute the method for adjusting a viewing picture in a virtual environment according to any one of claims 1-15.
  18. A virtual reality device, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute, by invoking the computer program stored in the memory, the method for adjusting a viewing picture in a virtual environment according to any one of claims 1-15.
  19. A computer program product, comprising a computer program, wherein, when the computer program is executed by a processor, the method for adjusting a viewing picture in a virtual environment according to any one of claims 1-15 is implemented.
PCT/CN2023/116228 2022-09-20 2023-08-31 Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device WO2024060959A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211146244.1 2022-09-20
CN202211146244.1A CN117784915A (en) 2022-09-20 2022-09-20 Method and device for adjusting video watching picture in virtual environment, storage medium and equipment

Publications (1)

Publication Number Publication Date
WO2024060959A1

Family

ID=90387539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/116228 WO2024060959A1 (en) 2022-09-20 2023-08-31 Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device

Country Status (2)

Country Link
CN (1) CN117784915A (en)
WO (1) WO2024060959A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170221180A1 (en) * 2016-01-29 2017-08-03 Colopl, Inc. Method and system for providing a virtual reality space
CN107037876A (en) * 2015-10-26 2017-08-11 Lg电子株式会社 System and the method for controlling it
CN107045389A (en) * 2017-04-14 2017-08-15 腾讯科技(深圳)有限公司 A kind of method and device for realizing the fixed controlled thing of control
CN107396077A (en) * 2017-08-23 2017-11-24 深圳看到科技有限公司 Virtual reality panoramic video stream projecting method and equipment
CN107977083A (en) * 2017-12-20 2018-05-01 北京小米移动软件有限公司 Operation based on VR systems performs method and device
US20180131920A1 (en) * 2016-11-08 2018-05-10 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
US20200225830A1 (en) * 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Near interaction mode for far virtual object
WO2020204594A1 (en) * 2019-04-04 2020-10-08 주식회사 코믹스브이 Virtual reality device and method for controlling same
CN113286138A (en) * 2021-05-17 2021-08-20 聚好看科技股份有限公司 Panoramic video display method and display equipment
US20220150464A1 (en) * 2019-03-08 2022-05-12 Sony Group Corporation Image processing apparatus, image processing method, and image processing program

Also Published As

Publication number Publication date
CN117784915A (en) 2024-03-29
