CN117115237A - Virtual reality position switching method, device, storage medium and equipment - Google Patents


Info

Publication number
CN117115237A
CN117115237A (application CN202210540710.8A)
Authority
CN
China
Prior art keywords
virtual reality
view angle
switching
target position
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210540710.8A
Other languages
Chinese (zh)
Inventor
赵文珲
贺翔
吴培培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210540710.8A priority Critical patent/CN117115237A/en
Publication of CN117115237A publication Critical patent/CN117115237A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual reality position switching method, device, storage medium and equipment. The method includes: acquiring a position switching instruction, which indicates that the display position of the virtual object corresponding to the current account in a virtual reality scene is to be switched from a first position to a target position; determining, according to the position switching instruction, the spatial coordinate information of the target position and the view angle corresponding to the target position; switching the display position of the virtual object in the virtual reality scene from the first position to the target position according to the spatial coordinate information; and switching the view angle of the virtual object from the view angle corresponding to the first position to the view angle corresponding to the target position, and displaying the view content under that view angle. In this way, movement to and switching between arbitrary positions in the virtual reality scene can be realized, the requirement of switching to different positions to watch view content is met, and the user experience is improved.

Description

Virtual reality position switching method, device, storage medium and equipment
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a method, an apparatus, a storage medium, and a device for switching positions in virtual reality.
Background
With the development of Virtual Reality (VR) technology, a user may employ a VR device, such as VR glasses, to view a Virtual Reality scene.
In the related art, position switching while experiencing a virtual reality scene can only be performed through fixed transfer points, for example from the first floor to the second floor of the virtual reality scene, from the second floor to the first floor, or from the first floor to the corridor of a movie theater. Because fixed transfer points cannot realize movement to and switching between arbitrary positions, the user experience is poor.
Disclosure of Invention
The embodiments of the present application provide a virtual reality position switching method, device, storage medium, equipment and program product, which can realize movement to and switching between arbitrary positions in a virtual reality scene, meet the requirement of switching to different positions to watch view content, and improve the user experience.
In one aspect, an embodiment of the present application provides a position switching method for virtual reality, where the method includes: acquiring a position switching instruction, wherein the position switching instruction is an instruction indicating that the display position of a virtual object corresponding to a current account in a virtual reality scene is to be switched from a first position to a target position;
According to the position switching instruction, determining the space coordinate information of the target position and the view angle corresponding to the target position;
switching the display position of the virtual object in the virtual reality scene from the first position to the target position according to the space coordinate information of the target position;
and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
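The four claimed steps can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the `VirtualObject` class, the `SCENE_CONFIG` table and all field names are hypothetical stand-ins for the preset per-position configuration described later in the description.

```python
from dataclasses import dataclass


@dataclass
class VirtualObject:
    position: tuple      # current display position (x, y, z)
    view_angle: float    # current view angle in degrees

# Hypothetical per-position configuration, preset when the scene is built
SCENE_CONFIG = {
    "row4_seatF": {"coords": (4.0, 0.0, 6.0), "view_angle": 90.0},
    "row2_seatC": {"coords": (2.0, 0.0, 3.0), "view_angle": 105.0},
}


def switch_position(obj: VirtualObject, instruction: dict) -> VirtualObject:
    """Sketch of the claimed method: resolve the target position's spatial
    coordinates and view angle, then apply both to the virtual object."""
    target = instruction["target_position"]   # step 1: parse the instruction
    cfg = SCENE_CONFIG[target]                # step 2: look up preset config
    obj.position = cfg["coords"]              # step 3: move display position
    obj.view_angle = cfg["view_angle"]        # step 4: switch view angle
    return obj
```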
In some embodiments, before the switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, the method further includes:
detecting whether a position area to which the target position belongs and a position area to which the first position belongs are different position areas;
and if the position area to which the target position belongs and the position area to which the first position belongs are different position areas, switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
In some embodiments, the switching the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first position to the angle of view corresponding to the target position includes:
and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the position area to which the first position belongs to the view angle corresponding to the position area to which the target position belongs.
In some embodiments, after detecting whether the position area to which the target position belongs and the position area to which the first position belongs are different position areas, the method further includes:
and if the position area to which the target position belongs and the position area to which the first position belongs are the same position areas, maintaining the view angle of the virtual object in the virtual reality scene as the view angle corresponding to the first position, and displaying view angle content under the view angle corresponding to the first position.
In some embodiments, when displaying the view angle content under the view angle corresponding to the first position, the method further includes:
determining the display proportion of view angle content under the view angle corresponding to the first position according to the position relation between the first position and the target position;
And displaying the view angle content under the view angle corresponding to the first position according to the display proportion.
In some embodiments, the positional relationship comprises a spacing between the first location and the target location;
the determining, according to the positional relationship between the first position and the target position, a display proportion of view angle content under a view angle corresponding to the first position includes:
and determining the display proportion of the view angle content under the view angle corresponding to the first position according to the spacing between the first position and the target position and the viewing direction of the virtual object.
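The embodiments above only state that the display proportion depends on the spacing between the two positions and the viewing direction of the virtual object; no formula is given. One plausible mapping, purely as an illustrative assumption (the `falloff` factor and the cosine weighting are inventions for this sketch), is:

```python
import math


def display_proportion(first_pos, target_pos, view_dir,
                       base_scale=1.0, falloff=0.1):
    """Illustrative only: scale the displayed content by the spacing between
    the two positions, weighted by how closely the movement aligns with the
    virtual object's viewing direction."""
    dx = [t - f for f, t in zip(first_pos, target_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return base_scale
    # cosine of the angle between the movement vector and the viewing direction
    dot = sum(d * v for d, v in zip(dx, view_dir))
    norm_v = math.sqrt(sum(v * v for v in view_dir))
    cos_a = dot / (dist * norm_v)
    # moving toward what is being viewed enlarges content; moving away shrinks it
    return base_scale * (1.0 + falloff * dist * cos_a)
```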
In some embodiments, the acquiring a position switch instruction includes:
in response to a transition operation input by an object, displaying on the virtual reality scene a parabola indicating transition and a transition positioning mark pointed to by the parabola;
in response to a movement operation for the transition positioning mark, displaying a movement track of the transition positioning mark on the virtual reality scene, where the start point of the movement track is the first position;
in response to a confirmation operation for the end point of the movement track, determining the end point of the movement track as the target position;
and generating the position switching instruction according to the target position.
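The transition-operation flow above reduces to: record the movement track of the transition positioning mark and, on confirmation, take its end point as the target position. A minimal sketch, where the instruction format is a hypothetical assumption:

```python
def build_switch_instruction(track_points, confirmed: bool):
    """The movement track of the transition positioning mark starts at the
    first position; on a confirmation operation, its end point becomes the
    target position and a position switching instruction is generated."""
    if not confirmed or not track_points:
        return None
    first_position = track_points[0]    # start of the track = first position
    target_position = track_points[-1]  # confirmed end point = target position
    return {
        "type": "position_switch",
        "from": first_position,
        "to": target_position,
    }
```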
In some embodiments, the acquiring a position switch instruction includes:
acquiring voice information;
if the voice information is detected to match preset voice data that indicates a position switch and contains the target position, generating the position switching instruction according to the voice information.
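The voice embodiment can be sketched as follows. The keyword set and the "row N seat X" pattern are assumptions made for illustration; the patent only requires that the recognized speech match preset voice data indicating a position switch and contain a target position.

```python
import re

# Hypothetical keyword set; the patent stores preset keywords indicating a
# position switch on the virtual reality device.
SWITCH_KEYWORDS = ("position switch", "switch")


def instruction_from_speech(text: str):
    """Generate a position switching instruction only when the recognized
    speech contains a preset switch keyword AND concrete target position
    information (matched here with an assumed 'row N <seat letter>' pattern)."""
    if not any(k in text.lower() for k in SWITCH_KEYWORDS):
        return None
    m = re.search(r"row\s*(\d+)\s*([A-Z])\b", text, re.IGNORECASE)
    if not m:
        return None
    return {"type": "position_switch",
            "to": (int(m.group(1)), m.group(2).upper())}
```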
In another aspect, an embodiment of the present application provides a position switching device for virtual reality, where the device includes:
an acquisition unit, configured to acquire a position switching instruction, wherein the position switching instruction is an instruction indicating that the display position of a virtual object corresponding to a current account in a virtual reality scene is to be switched from a first position to a target position;
the determining unit is used for determining the space coordinate information of the target position and the view angle corresponding to the target position according to the position switching instruction;
a first switching unit, configured to switch a display position of the virtual object in the virtual reality scene from the first position to the target position according to spatial coordinate information of the target position;
and the second switching unit is used for switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
In another aspect, embodiments of the present application provide a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the steps in the method for position switching of virtual reality as described in any of the embodiments above.
In another aspect, an embodiment of the present application provides a virtual reality device, where the virtual reality device includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the steps in the method for switching positions of virtual reality according to any one of the embodiments above by calling the computer program stored in the memory.
In another aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the steps in the virtual reality position switching method as described in any of the embodiments above.
In the embodiments of the present application, a position switching instruction is acquired, the position switching instruction being an instruction indicating that the display position of the virtual object corresponding to the current account in the virtual reality scene is to be switched from a first position to a target position; according to the position switching instruction, the spatial coordinate information of the target position and the view angle corresponding to the target position are determined; the display position of the virtual object in the virtual reality scene is switched from the first position to the target position according to the spatial coordinate information of the target position; and the view angle of the virtual object in the virtual reality scene is switched from the view angle corresponding to the first position to the view angle corresponding to the target position, with the view angle content under the target position's view angle then displayed. Because position switching is realized through the spatial coordinate information of the target position and view angle switching is based on the view angle corresponding to the target position, movement to and switching between arbitrary positions in the virtual reality scene can be realized, the requirement of switching to different positions to watch view content is met, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first flow chart of a virtual reality position switching method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a first application scenario of a virtual reality position switching method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a second application scenario of the virtual reality position switching method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a third application scenario of a virtual reality position switching method according to an embodiment of the present application.
Fig. 5 is a second flow chart of a virtual reality position switching method according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a fourth application scenario of a virtual reality position switching method according to an embodiment of the present application.
Fig. 7 is a third flow chart of a virtual reality position switching method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a fifth application scenario of a virtual reality position switching method according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a sixth application scenario of a virtual reality position switching method according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a virtual reality position switching device according to an embodiment of the present application.
Fig. 11 is a schematic diagram of a first structure of a virtual reality device according to an embodiment of the present application.
Fig. 12 is a second schematic structural diagram of a virtual reality device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a virtual reality position switching method, a virtual reality position switching device, virtual reality equipment and a storage medium. Specifically, the position switching method of virtual reality in the embodiment of the application can be executed by the virtual reality device.
The embodiments of the present application can be applied to various application scenarios such as Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and games.
First, some terms appearing in the description of the embodiments of the application are explained as follows:
Virtual Reality (VR) is a technology for creating and experiencing a virtual world. It generates a virtual environment by computation and is a multi-source information simulation (the virtual reality mentioned herein at least includes visual perception, and may further include auditory perception, tactile perception, motion perception, and even gustatory and olfactory perception) that fuses interactive three-dimensional dynamic views with entity behavior, immersing the user in the simulated virtual reality environment. It is applied in various virtual environments such as maps, games, video, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, and maintenance and repair.
Augmented Reality (AR) is a technique of calculating camera pose parameters in the real world (or three-dimensional world) in real time during image acquisition by a camera, and adding virtual elements to the captured image according to those pose parameters. Virtual elements include, but are not limited to, images, videos, and three-dimensional models. The goal of AR technology is to superimpose the virtual world over the real world on the screen for interaction.
Mixed Reality (MR) integrates computer-created sensory input (e.g., virtual objects) with sensory input from a physical scenery or a representation thereof into a simulated scenery, and in some MR sceneries, the computer-created sensory input may be adapted to changes in sensory input from the physical scenery. In addition, some electronic systems for rendering MR scenes may monitor orientation and/or position relative to the physical scene to enable virtual objects to interact with real objects (i.e., physical elements from the physical scene or representations thereof). For example, the system may monitor movement such that the virtual plants appear to be stationary relative to the physical building.
Augmented Virtuality (AV): an AV scenery refers to a simulated scenery in which a computer-created or virtual scenery incorporates at least one sensory input from a physical scenery. The one or more sensory inputs from the physical scenery may be representations of at least one feature of the physical scenery. For example, a virtual object may present the color of a physical element captured by one or more imaging sensors. As another example, a virtual object may exhibit characteristics consistent with actual weather conditions in the physical scenery, as identified via weather-related imaging sensors and/or online weather data. In another example, an augmented virtuality forest may have virtual trees and structures, but an animal may have features accurately reproduced from images taken of a physical animal.
A virtual field of view represents the region of the virtual environment that a user can perceive through the lenses of a virtual reality device, expressed as a field of view (FOV) angle.
A virtual reality device, i.e., a terminal realizing a virtual reality effect, may be provided in the form of glasses, a head-mounted display (HMD), or contact lenses realizing visual perception and other forms of perception; however, the form of the virtual reality device is not limited to these, and it may be further miniaturized or enlarged as needed.
The virtual reality device described in the embodiments of the present application may include, but is not limited to, the following types:
A computer-side virtual reality (PCVR) device uses a PC to perform the related computation of the virtual reality function and the data output; the external PCVR device uses the data output by the PC to realize the virtual reality effect.
A mobile virtual reality device supports mounting a mobile terminal (such as a smartphone) in various manners (such as a head-mounted display provided with a special card slot). Connected to the device in a wired or wireless manner, the mobile terminal performs the related computation of the virtual reality function and outputs data to the mobile virtual reality device, for example, to watch a virtual reality video through an app on the mobile terminal.
An integrated (standalone) virtual reality device has its own processor for performing the computation related to the virtual reality function, and thus has independent virtual reality input and output functions; it needs no connection with a PC or mobile terminal and offers a high degree of freedom in use.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
The embodiments of the present application provide a virtual reality position switching method, which may be executed by a terminal or a server, or may be executed by the terminal and the server together; the embodiment of the present application will be described with an example in which a virtual reality position switching method is executed by a terminal (virtual reality device).
Referring to fig. 1 to fig. 9, fig. 1, fig. 5 and fig. 7 are flowcharts of a virtual reality position switching method according to an embodiment of the present application, and fig. 2 to fig. 4, fig. 6 and fig. 8 to fig. 9 are diagrams of related application scenarios according to an embodiment of the present application, where the blank backgrounds in fig. 2 to fig. 4, fig. 6 and fig. 8 to fig. 9 may be virtual reality space layers. The method comprises the following steps:
Step 110: a position switching instruction is acquired, where the position switching instruction indicates that the display position of a virtual object corresponding to the current account in a virtual reality scene is to be switched from a first position to a target position.
Wherein, the virtual reality environment can include a virtual reality scene and at least one virtual object active in the virtual reality scene, and the virtual object can include a Player Character controlled by a user (or Player) or a Non-Player Character (NPC) controlled by a system. For example, the virtual object may also include one or more character attributes, such as skill attributes, character status attributes, etc., to provide assistance to the player, provide virtual services, increase points related to the player's performance, etc. For example, the virtual reality environment, such as a game, may also include one or more virtual obstacles, such as rails, ravines, walls, etc., in the virtual reality scene of the game to limit movement of the virtual objects, such as to limit movement of the one or more virtual objects to a particular region within the virtual reality scene. In addition, one or more indicators may be presented in the virtual reality scene to provide indication information to the player.
For example, an object (such as a user or a real player) may log in and access a virtual reality scene in a virtual reality environment by using a current account in a virtual reality device, the virtual object corresponding to the current account being a player character that the user controls through the virtual reality device.
In some embodiments, the acquiring a position switch instruction includes:
in response to a transition operation input by an object, displaying on the virtual reality scene a parabola indicating transition and a transition positioning mark pointed to by the parabola;
in response to a movement operation for the transition positioning mark, displaying a movement track of the transition positioning mark on the virtual reality scene, where the start point of the movement track is the first position;
in response to a confirmation operation for the end point of the movement track, determining the end point of the movement track as the target position;
and generating the position switching instruction according to the target position.
Any one of the above-mentioned transition operation, movement operation, and confirmation operation may be triggered by the handle, or may be triggered by a gesture, or may be triggered by a multi-modal operation, which may include a combination of a gesture operation and a voice input.
For example, as shown in fig. 2, a viewing area and a stage area may be displayed in the virtual reality scene 10, where the virtual object 11 is generally located in the viewing area; for example, the current display position of the virtual object 11 in the virtual reality scene 10 is the first position, such as row-4 seat F in the viewing area.
For example, if the virtual reality device is connected to a handle, the operation instruction may be input through the handle. As shown in fig. 3, a virtual handle 12 may also be displayed in the virtual reality scene 10. In response to a transition operation input by the object (user) through the handle, a parabola 13 indicating transition and a transition positioning mark 14 to which the parabola 13 points may be displayed on the virtual reality scene 10. In response to the object (user) triggering a movement operation for the transition positioning mark 14 through the handle, a movement track 15 of the transition positioning mark 14 is displayed on the virtual reality scene 10, the start point of the movement track 15 being the first position; for example, before the movement operation, the transition positioning mark 14 is displayed at the first position, which is row-4 seat F in the viewing area. In response to the object (user) triggering, through the handle, a confirmation operation for the end point of the movement track 15, the end point is determined as the target position. For example, if the end point of the movement track 15 is row-2 seat C in the viewing area, that seat is determined as the target position. Then, a position switching instruction is generated according to the target position, the instruction indicating that the display position of the virtual object 11 corresponding to the current account in the virtual reality scene 10 is to be switched from the first position to the target position, i.e., from row-4 seat F to row-2 seat C in the viewing area.
For example, a parabola 13 and a transition positioning mark 14 shown by a broken line in fig. 3 represent a display state before the transition positioning mark 14 moves; the parabola 13 and the transition positioning mark 14 shown by solid lines in fig. 3 represent the display state after the transition positioning mark 14 is moved. For example, in the virtual reality scene 10, the transition positioning mark 14 may be an aperture effect, a light projection effect, or other marks.
For example, the detected gesture of the object may be matched with a preset gesture to determine a responsive operation instruction, so as to trigger a transition operation, a movement operation, a confirmation operation, and the like.
In some embodiments, the acquiring a position switch instruction includes: acquiring voice information; and if the voice information is detected to match preset voice data that indicates a position switch and contains the target position, generating the position switching instruction according to the voice information.
For example, the user inputs the voice information "please switch the position to row-2 seat C". Since keywords indicating a position switch, such as "position switch" and "switch", are stored in the virtual reality device, the virtual reality device recognizes the voice information; if the voice information is recognized to include the keyword "switch" and the specific position information "row-2 seat C", a position switching instruction is generated according to the voice information, the instruction indicating that the display position of the virtual object 11 in the virtual reality scene 10 is to be switched from the first position (row-4 seat F) to the target position (row-2 seat C) in the viewing area.
For example, the first position and the target position may be located in the same graphical user interface in the virtual reality scene; fig. 3 may be referred to as an application scene diagram for generating the position switching instruction.
For example, the first position and the target position may be located within different graphical user interfaces in the virtual reality scene. For example, the first position is located on a first graphical user interface and the target position is located on a second graphical user interface. When the moving operation is executed, page switching of the graphical user interface can be achieved by triggering a fixed teleport point; after switching to the second graphical user interface, the moving operation continues toward the target position and a movement track is displayed. In response to a confirmation operation on the end point of the movement track, the end point of the movement track is determined to be the target position, and a position switching instruction is then generated for indicating that the display position of the virtual object corresponding to the current account in the virtual reality scene is switched from the first position on the first graphical user interface to the target position on the second graphical user interface.
And step 120, determining the space coordinate information of the target position and the view angle corresponding to the target position according to the position switching instruction.
The angle of view represents a distribution range of viewing angles that the lens has when sensing the virtual environment. For example, the angle of view of the virtual reality device may represent the range of distribution of viewing angles that the human eye has when the virtual environment is perceived through the lens of the virtual reality device; for example, in a mobile terminal provided with a camera, the field angle of the camera is a distribution range of the viewing angle that the camera has when sensing the real environment to shoot.
For example, for different positions in the virtual reality scene 10, configuration information is preset when the virtual environment is constructed, and the configuration information may include spatial coordinate information corresponding to the different positions and view angles corresponding to the different positions. And determining the space coordinate information of the target position and the view angle corresponding to the target position from preset configuration information according to the target position indicated in the position switching instruction.
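The lookup from preset configuration information described in step 120 can be sketched as follows. `SEAT_CONFIG` and `resolve_target` are illustrative names, and the coordinate and view-angle values are invented for the example:

```python
# Hypothetical preset configuration built when the virtual scene is
# constructed: each seat maps to its spatial coordinates and to the
# view angle associated with its position area.
SEAT_CONFIG = {
    "4F": {"coords": (3.0, 0.0, 4.5), "view_angle": "a"},
    "2C": {"coords": (-1.5, 0.0, 1.5), "view_angle": "b"},
}

def resolve_target(instruction):
    """Determine the spatial coordinates and view angle of the target
    position named in a position-switching instruction (step 120)."""
    cfg = SEAT_CONFIG[instruction["target"]]
    return cfg["coords"], cfg["view_angle"]
```

The coordinates returned here drive the display-position switch of step 130, and the view angle drives the view switch of step 140.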
And step 130, switching the display position of the virtual object in the virtual reality scene from the first position to the target position according to the space coordinate information of the target position.
And responding to the position switching instruction, positioning the coordinates of the target position in the virtual reality scene according to the space coordinate information of the target position, and switching the display position of the virtual object in the virtual reality scene from the first position to the target position. As shown in fig. 4, in response to the position switching instruction, the display position of the virtual object 11 in the virtual reality scene 10 is switched from the first position (row 4F seats) to the target position (row 2C seats).
And 140, switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
Wherein the view content comprises a video stream and/or a live stream.
For example, fig. 2 shows an application scene schematic before the position switch, and fig. 4 shows an application scene schematic after the position switch. In response to the position switching instruction, the display position of the virtual object 11 in the virtual reality scene 10 is switched from the first position (row 4F seat) to the target position (row 2C seat), the angle of view of the virtual object 11 in the virtual reality scene 10 is switched from the angle of view a (shown in fig. 2) corresponding to the first position to the angle of view b (shown in fig. 4) corresponding to the target position, and the view content at the angle of view b corresponding to the target position is displayed.
In some embodiments, as shown in fig. 5, step 150 may also be performed prior to step 140:
step 150, detecting whether the position area to which the target position belongs and the position area to which the first position belongs are different position areas; if yes, go to step 140; if not, go to step 160.
For example, the viewing area of a virtual reality scene may be divided into regions of a certain size, and the video stream may remain unchanged as long as the virtual object moves within the same region. For example, if the virtual object moves from a rear area to a front area, the move belongs to cross-area switching, and the operation of switching the video stream is triggered.
Specifically, if the target position and the first position before the move belong to different position areas, the angle of view and the video stream are switched to those corresponding to the position area where the target position is located; if the target position and the first position before the move belong to the same position area, the angle of view and the video stream do not need to be switched. For example, to enhance the user experience, if the target position and the first position before the move belong to the same position area, fine adjustment such as zooming in or zooming out may be performed on the displayed view content.
For example, as shown in fig. 3 or fig. 6, the viewing area of the virtual reality scene 10 may include a first position area, a second position area, a third position area, and a fourth position area, all of which are viewing areas. The first position area includes the row 1 A/B/C seats and the row 2 A/B/C seats; the second position area includes the row 1 D/E/F seats and the row 2 D/E/F seats; the third position area includes the row 3 A/B/C seats and the row 4 A/B/C seats; the fourth position area includes the row 3 D/E/F seats and the row 4 D/E/F seats. Different position areas have different angles of view, and any positions belonging to the same position area have the same angle of view.
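The seat-to-area mapping of this example layout, together with the cross-area check performed in step 150, might be sketched as follows. This is a simplified illustration: the function names and the 4-row, A-F column layout are assumptions drawn from the figures described above:

```python
def position_area(row, col):
    """Map a seat (row number, column letter) to one of the four
    position areas in the example layout: rows 1-2/A-C -> area 1,
    rows 1-2/D-F -> area 2, rows 3-4/A-C -> area 3, rows 3-4/D-F -> area 4."""
    front = row <= 2
    left = col in "ABC"
    return {(True, True): 1, (True, False): 2,
            (False, True): 3, (False, False): 4}[(front, left)]

def needs_view_switch(first, target):
    """True when the move crosses position areas, so the view angle and
    video stream must be switched (step 140); False keeps the current
    view angle (step 160)."""
    return position_area(*first) != position_area(*target)
```

With this layout, moving from the row 4F seat to the row 2C seat crosses from area 4 to area 1 and triggers a view switch, while moving from row 4F to row 3D stays within area 4 and does not.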
In step 140, if the location area to which the target location belongs and the location area to which the first location belongs are different location areas, the angle of view of the virtual object in the virtual reality scene is switched from the angle of view corresponding to the first location to the angle of view corresponding to the target location, and the view content under the angle of view corresponding to the target location is displayed.
In some embodiments, the switching the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first position to the angle of view corresponding to the target position includes:
and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the position area to which the first position belongs to the view angle corresponding to the position area to which the target position belongs.
For example, as shown in fig. 3, if the first position is the row 4F seat and the target position is the row 2C seat, it is detected that the position area to which the target row 2C seat belongs is the first position area, while the position area to which the first position belongs is the fourth position area. The two are therefore different position areas, and step 140 is executed: the angle of view of the virtual object 11 in the virtual reality scene 10 is switched from the angle of view a (shown in fig. 2) corresponding to the first position to the angle of view b (shown in fig. 4) corresponding to the target position, and the view content under the angle of view b corresponding to the target position is displayed.
And step 160, maintaining the view angle of the virtual object in the virtual reality scene as the view angle corresponding to the first position, and displaying the view angle content under the view angle corresponding to the first position.
For example, as shown in fig. 6, if the first position is the row 4F seat and the target position is the row 3D seat, it is detected that the position area to which the target row 3D seat belongs is the fourth position area, and the position area to which the first position belongs is also the fourth position area. The two are the same position area, so step 160 is executed: the view angle of the virtual object 11 in the virtual reality scene 10 is maintained at the view angle corresponding to the first position, and the view content under the view angle corresponding to the first position is displayed.
Specifically, as shown in fig. 6, if the position area to which the target position belongs and the position area to which the first position belongs are the same position area, the target position and the first position both have the view angle corresponding to the fourth position area; the view angle of the virtual object in the virtual reality scene is therefore maintained at the view angle corresponding to the first position, and the view content under that view angle is displayed.
In some embodiments, as shown in fig. 7, step 160 may be implemented by steps 161 to 163, specifically:
step 161, maintaining the angle of view of the virtual object in the virtual reality scene to be the angle of view corresponding to the first position.
For example, since the position area to which the target position belongs and the position area to which the first position belongs are the same position area, the target position and the first position have the same angle of view corresponding to that position area, and the angle of view of the virtual object in the virtual reality scene can be maintained at the angle of view corresponding to the first position.
Step 162, determining a display proportion of the view angle content under the view angle corresponding to the first position according to the position relation between the first position and the target position.
In some embodiments, the positional relationship comprises a spacing between the first location and the target location; the determining, according to the positional relationship between the first position and the target position, a display proportion of view angle content under a view angle corresponding to the first position includes:
and determining the display proportion of the visual angle content under the visual angle corresponding to the first position according to the distance between the first position and the target position and the visual direction of the virtual object.
For example, if it is determined, according to the distance between the first position and the target position and the visual direction of the virtual object, that the viewing distance at the target position is shorter, the rendered picture needs to be enlarged, and the display proportion of the view content under the view angle corresponding to the first position is determined to be a preset magnification factor.
For example, if it is determined, according to the distance between the first position and the target position and the visual direction of the virtual object, that the viewing distance at the target position is longer, the rendered picture needs to be reduced, and the display proportion of the view content under the view angle corresponding to the first position is determined to be a preset reduction factor.
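The zoom decision of steps 161-162 can be sketched as follows. This is a minimal illustration in which positions are 2-D (x, z) tuples and the magnification and reduction factors are arbitrary placeholder values:

```python
def display_scale(first, target, view_dir, zoom_in=1.2, zoom_out=0.8):
    """Pick a display proportion for the rendered view within the same
    position area: moving along `view_dir` (toward the screen) shortens
    the viewing distance and zooms in; moving against it zooms out; a
    purely sideways move keeps the scale unchanged."""
    dx = target[0] - first[0]
    dz = target[1] - first[1]
    # Projection of the move onto the viewing direction.
    toward = dx * view_dir[0] + dz * view_dir[1]
    if toward > 0:
        return zoom_in   # viewing distance shorter: enlarge the picture
    if toward < 0:
        return zoom_out  # viewing distance longer: reduce the picture
    return 1.0
```

For example, with the virtual object looking toward negative z, moving one row forward returns the magnification factor and moving one row back returns the reduction factor.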
And step 163, displaying the view angle content under the view angle corresponding to the first position according to the display proportion.
For example, if the display ratio is a preset magnification, the view angle content under the view angle corresponding to the first position displayed is the view angle content of the enlarged picture.
For example, if the display scale is a preset reduction multiple, the view angle content under the view angle corresponding to the first position displayed is the view angle content with reduced picture.
For example, in a live broadcast scenario, the virtual reality scenario may be a live broadcast room, a plurality of interaction fields may be provided in the virtual reality scenario, different interaction fields belong to different location areas, and a virtual object corresponding to the current account may shuttle between the different interaction fields.
A setting interface may be provided for the user to set relevant parameters of the interaction fields. For example, through the setting interface, the user can set the model of each interaction field, and information such as the movement boundary, the interaction field position, the interaction field size, the screen information corresponding to each interaction field, and whether a single-person mode exists.
The screen information corresponding to the interaction field may include a screen type (180 degrees/360 degrees), a live stream corresponding to the screen, a screen position, a screen size, and a screen angle. The screen position can be determined by the selected position, and the screen angle can be determined by the view angle corresponding to the selected position.
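A possible shape for this per-field screen information is shown below, purely as an illustration; the keys, stream URL, and values are invented for the sketch:

```python
# Hypothetical screen information for one interaction field, covering
# the fields listed above; all names and values are illustrative only.
screen_info = {
    "screen_type": 180,                        # degrees: 180 or 360
    "live_stream": "rtmp://example/stream-a",  # stream bound to this screen
    "position": (0.0, 2.0, -6.0),              # determined by the selected position
    "size": (8.0, 4.5),                        # screen width/height
    "angle": 15.0,  # determined by the view angle of the selected position
}
```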
The shuttling of the virtual object between interaction fields can be realized through fixed shuttle points, or free shuttling to any position can be realized through the virtual reality position switching method provided by the embodiments of the present application; the free shuttling mode provided by the embodiments of the present application may be called teleport transmission.
For example, in a live scene, a plurality of interaction fields may be set, where each interaction field corresponds to one position area and has the same field angle and live streaming content at any position within that area. For example, four interaction fields are provided, specifically three under-table interaction fields and one on-table interaction field. For example, the under-table interaction fields may be transmitted to each other via teleport. For example, the on-table interaction field and the under-table interaction fields can be switched between by a bracelet. For example, the on-table interaction field may default to a single-person mode.
Each interaction field corresponds to one home screen; for example, four interaction fields correspond to four different 180-degree home screens.
For example, the application scene setting of the interaction fields can be realized by modifying the original multi-user interaction document. For example, the camera positions, screens, and position areas of the interaction fields in the virtual reality scene need to be matched and calibrated against the camera positions, screens, and viewing areas in the live shooting scene, so that the view content of the virtual reality scene and the real live shooting scene is seamlessly connected.
For example, the virtual reality scene may further include interaction fields, home screens, and camera positions in one-to-one correspondence.
For example, a secondary screen may also be included in the virtual reality scene. The auxiliary screen may not be bound to the interaction field, and may be part of the stage design model.
For example, as shown in fig. 8, the virtual reality scene 10 in the live scene is provided with four interaction fields, specifically three under-table interaction fields 102 (including a first under-table interaction field 102A, a second under-table interaction field 102B, and a third under-table interaction field 102C) and one on-table interaction field 101.
For example, if the virtual reality device is connected to a handle, operation instructions may be input through the handle. As shown in fig. 7, a virtual handle 12 may also be displayed in the virtual reality scene 10. For example, in response to a transition operation input by an object (user) through the handle, a parabola 13 indicating the transition and a transition positioning mark 14 to which the parabola 13 points may be displayed in the virtual reality scene 10. In response to the object (user) triggering a movement operation for the transition positioning mark 14 through the handle, a movement track 15 of the transition positioning mark 14 is displayed in the virtual reality scene 10, the start point of the movement track 15 being the first position. For example, before the movement operation is responded to, the transition positioning mark 14 is displayed at the first position, which is the third under-table interaction field 102C. In response to the object (user) triggering a confirmation operation for the end point of the movement track 15 through the handle, the end point of the movement track 15 is determined as the target position. For example, if the end point of the movement track 15 is the first under-table interaction field 102A, the first under-table interaction field 102A is determined as the target position. A position switching instruction is then generated according to the target position, the position switching instruction indicating that the display position of the virtual object 11 corresponding to the current account in the virtual reality scene 10 is to be switched from the first position to the target position, that is, from the third under-table interaction field 102C to the first under-table interaction field 102A. For example, the parabola 13 and the transition positioning mark 14 shown by broken lines in fig. 7 represent the display state before the transition positioning mark 14 is moved; the parabola 13 and the transition positioning mark 14 shown by solid lines in fig. 7 represent the display state after the transition positioning mark 14 is moved.
For example, as shown in fig. 9, in response to the position switching instruction, the display position of the virtual object 11 in the virtual reality scene 10 is switched from the third under-table interaction field 102C to the first under-table interaction field 102A, the angle of view of the virtual object 11 in the virtual reality scene 10 is switched from the angle of view corresponding to the third under-table interaction field 102C to the angle of view corresponding to the first under-table interaction field 102A, and the live streaming content at the angle of view corresponding to the first under-table interaction field 102A is displayed.
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
According to the embodiment of the application, the position switching instruction is an instruction for indicating to switch the display position of the virtual object corresponding to the current account in the virtual reality scene from the first position to the target position; according to the position switching instruction, determining the space coordinate information of the target position and the field angle corresponding to the target position; switching the display position of the virtual object in the virtual reality scene from the first position to the target position according to the space coordinate information of the target position; and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position. According to the embodiment of the application, the position switching is realized through the space coordinate information of the target position, the field angle switching is realized based on the field angle corresponding to the target position, the movement and switching of any position in the virtual reality scene can be realized, the requirement of switching to different positions for watching the visual angle content is met, and the user experience is improved.
In order to facilitate better implementation of the virtual reality position switching method of the embodiment of the application, the embodiment of the application also provides a virtual reality position switching device. Referring to fig. 10, fig. 10 is a schematic structural diagram of a virtual reality position switching device according to an embodiment of the application. The virtual reality position switching apparatus 200 may include:
an obtaining unit 210, configured to obtain a position switching instruction, where the position switching instruction is an instruction indicating to switch a display position of a virtual object corresponding to a current account from a first position to a target position in a virtual reality scene;
a determining unit 220, configured to determine, according to the position switching instruction, spatial coordinate information of the target position and a view angle corresponding to the target position;
a first switching unit 230, configured to switch a display position of the virtual object in the virtual reality scene from the first position to the target position according to spatial coordinate information of the target position;
the second switching unit 240 is configured to switch, according to the field angle corresponding to the target position, the field angle of the virtual object in the virtual reality scene from the field angle corresponding to the first position to the field angle corresponding to the target position, and display the view angle content under the field angle corresponding to the target position.
In some embodiments, the apparatus further comprises:
a detecting unit 250 configured to detect whether a location area to which the target location belongs and a location area to which the first location belongs are different location areas;
the second switching unit 240 is configured to switch, if the location area to which the target location belongs and the location area to which the first location belongs are different location areas, the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first location to the angle of view corresponding to the target location, and display the view content under the angle of view corresponding to the target location.
In some embodiments, the second switching unit 240 is specifically configured to, when switching the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first position to the angle of view corresponding to the target position: and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the position area to which the first position belongs to the view angle corresponding to the position area to which the target position belongs.
In some embodiments, the second switching unit 240 is further configured to maintain the view angle of the virtual object in the virtual reality scene as the view angle corresponding to the first position if the location area to which the target position belongs is the same location area as the location area to which the first position belongs, and display the view angle content under the view angle corresponding to the first position.
In some embodiments, the second switching unit 240 is further configured to, when displaying the view angle content at the view angle corresponding to the first position: determining the display proportion of view angle content under the view angle corresponding to the first position according to the position relation between the first position and the target position; and displaying the view angle content under the view angle corresponding to the first position according to the display proportion.
In some embodiments, the positional relationship comprises a spacing between the first location and the target location; the second switching unit 240 may be configured to, when determining the display ratio of the view angle content at the view angle corresponding to the first position according to the positional relationship between the first position and the target position: and determining the display proportion of the visual angle content under the visual angle corresponding to the first position according to the distance between the first position and the target position and the visual direction of the virtual object.
In some embodiments, the obtaining unit 210 may be configured to: responding to the transition operation input by an object, and displaying a transition positioning mark indicating a parabola of transition and pointing to the parabola on the virtual reality scene; responding to the moving operation aiming at the transition positioning mark, and displaying a moving track of the transition positioning mark on the virtual reality scene, wherein the starting point of the moving track is the first position; determining an end point of the movement track as the target position in response to a confirmation operation for the end point of the movement track; and generating the position switching instruction according to the target position.
In some embodiments, the obtaining unit 210 may be configured to: acquiring voice information; if the voice information is detected to be the voice data which is preset, indicates the position to switch and contains the target position, generating the position switching instruction according to the voice information.
Each unit in the above-described virtual reality position switching apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above units may be embedded in hardware or independent from a processor in the virtual reality device, or may be stored in software in a memory in the virtual reality device, so that the processor invokes and executes operations corresponding to the above units.
The virtual reality position switching device 200 may be integrated in a terminal or a server having a memory and a processor mounted thereon and having an arithmetic capability, or the virtual reality position switching device 200 may be the terminal or the server.
In some embodiments, the present application further provides a virtual reality device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method embodiments described above when executing the computer program.
As shown in fig. 11, fig. 11 is a schematic structural diagram of a virtual reality device according to an embodiment of the present application, and the virtual reality device 300 may be generally provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or contact lenses for realizing visual perception and other forms of perception, but the form of realizing the virtual reality device is not limited thereto, and may be further miniaturized or enlarged as required. The virtual reality device 300 may include, but is not limited to, the following:
the detection module 301: various sensors are used to detect user operation commands and act on the virtual environment, such as to update the images displayed on the display screen along with the user's line of sight, to achieve user interaction with the virtual and scene, such as to update real content based on the detected direction of rotation of the user's head.
Feedback module 302: receiving data from the sensor, providing real-time feedback to the user; wherein the feedback module 302 may be for displaying a graphical user interface, such as displaying a virtual environment on the graphical user interface.
Sensor 303: on the one hand, accepts operation commands from the user and applies them to the virtual environment; on the other hand, provides the results generated after the operation to the user in the form of various feedback.
Control module 304: the sensors and various input/output devices are controlled, including obtaining user data (e.g., motion, speech) and outputting sensory data, such as images, vibrations, temperature, sounds, etc., to affect the user, virtual environment, and the real world.
Modeling module 305: constructing a three-dimensional model of a virtual environment may also include various feedback mechanisms such as sound, touch, etc. in the three-dimensional model.
In the embodiment of the present application, a position switching instruction may be acquired by the detection module 301, where the position switching instruction is an instruction for indicating to switch a display position of a virtual object corresponding to a current account from a first position to a target position in a virtual reality scene; the control module 304 determines the space coordinate information of the target position and the view angle corresponding to the target position according to the position switching instruction; and
the control module 304 switches the display position of the virtual object in the virtual reality scene from the first position to the target position according to the spatial coordinate information of the target position, switches the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displays the view angle content under the view angle corresponding to the target position through the feedback module 302. Wherein, a three-dimensional model of the virtual environment can be constructed by the modeling module 305, and view angle contents under the view angle corresponding to the target position are rendered and generated.
In some embodiments, as shown in fig. 12, fig. 12 is another schematic structural diagram of a virtual reality device provided in an embodiment of the present application, and the virtual reality device 300 further includes: radio frequency circuitry 306, audio circuitry 307, and power supply 308. The control module 304 is electrically connected to the rf circuit 306, the audio circuit 307, and the power supply 308, respectively. Those skilled in the art will appreciate that the virtual reality device structure shown in fig. 11 or 12 does not constitute a limitation of the virtual reality device, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The radio frequency circuitry 306 may be configured to receive and transmit radio frequency signals to and from a network device or other virtual reality device via wireless communication to and from the network device or other virtual reality device.
The audio circuit 307 may be used to provide an audio interface between the user and the virtual reality device through a speaker and a microphone. On one hand, the audio circuit 307 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 307 and converted into audio data. The audio data is then processed and sent, for example, to another virtual reality device via the radio frequency circuit 306, or output to the memory for further processing. The audio circuit 307 may also include an earbud jack to provide communication between peripheral headphones and the virtual reality device.
The power supply 308 is used to power the various components of the virtual reality device 300.
Although not shown in fig. 11 or 12, the virtual reality device 300 may further include a camera, a wireless fidelity module, a bluetooth module, an input module, etc., which are not described herein.
The present application also provides a computer-readable storage medium storing a computer program. The computer-readable storage medium may be applied to a virtual reality device, and the computer program causes the virtual reality device to execute a corresponding flow in the virtual reality position switching method in the embodiments of the present application, which is not described herein for brevity.
The present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the virtual reality device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the virtual reality device executes a corresponding flow in the virtual reality position switching method in the embodiment of the present application, which is not described herein for brevity.
The present application also provides a computer program, which is stored in a computer-readable storage medium. The processor of the virtual reality device reads the computer program from the computer-readable storage medium and executes it, so that the virtual reality device executes a corresponding flow in the virtual reality position switching method in the embodiments of the present application, which is not described herein for brevity.
It should be appreciated that the processor of an embodiment of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in software form. The processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logical blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in the embodiments of the application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that the above memory is illustrative but not restrictive; for example, the memory in the embodiments of the present application may also be static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), direct Rambus RAM (DR RAM), and the like. That is, the memory in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may essentially be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a virtual reality device (which may be a personal computer or a server) to perform all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto; any variation or substitution that would readily occur to a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for switching positions of virtual reality, the method comprising:
acquiring a position switching instruction, wherein the position switching instruction is an instruction for indicating to switch the display position of a virtual object corresponding to a current account from a first position to a target position in a virtual reality scene;
according to the position switching instruction, determining the space coordinate information of the target position and the view angle corresponding to the target position;
switching the display position of the virtual object in the virtual reality scene from the first position to the target position according to the space coordinate information of the target position;
and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
2. The method for switching the position of the virtual reality according to claim 1, further comprising, before the switching the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first position to the angle of view corresponding to the target position according to the angle of view corresponding to the target position:
detecting whether a position area to which the target position belongs and a position area to which the first position belongs are different position areas;
and if the position area to which the target position belongs and the position area to which the first position belongs are different position areas, switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
3. The method for switching the position of the virtual reality according to claim 2, wherein the switching the angle of view of the virtual object in the virtual reality scene from the angle of view corresponding to the first position to the angle of view corresponding to the target position includes:
and switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the position area to which the first position belongs to the view angle corresponding to the position area to which the target position belongs.
4. The method according to claim 2, further comprising, after the detecting whether the position area to which the target position belongs and the position area to which the first position belongs are different position areas:
and if the position area to which the target position belongs and the position area to which the first position belongs are the same position area, maintaining the view angle of the virtual object in the virtual reality scene as the view angle corresponding to the first position, and displaying view angle content under the view angle corresponding to the first position.
5. The method for switching a position of virtual reality according to claim 4, further comprising, when displaying view angle content at a view angle corresponding to the first position:
determining the display proportion of view angle content under the view angle corresponding to the first position according to the position relation between the first position and the target position;
and displaying the view angle content under the view angle corresponding to the first position according to the display proportion.
6. The virtual reality position switching method of claim 5, wherein the positional relationship includes a spacing between the first position and the target position;
the determining, according to the positional relationship between the first position and the target position, a display proportion of view angle content under a view angle corresponding to the first position includes:
and determining the display proportion of the view angle content under the view angle corresponding to the first position according to the distance between the first position and the target position and the visual direction of the virtual object.
7. The method for switching a position of a virtual reality according to any one of claims 1-6, wherein the obtaining a position switching instruction includes:
responding to a transition operation input by an object, and displaying, on the virtual reality scene, a parabola indicating the transition and a transition positioning mark pointed to by the parabola;
responding to the moving operation aiming at the transition positioning mark, and displaying a moving track of the transition positioning mark on the virtual reality scene, wherein the starting point of the moving track is the first position;
determining an end point of the movement track as the target position in response to a confirmation operation for the end point of the movement track;
and generating the position switching instruction according to the target position.
8. The method for switching a position of a virtual reality according to any one of claims 1-6, wherein the obtaining a position switching instruction includes:
acquiring voice information;
if the voice information is detected to be preset voice data that indicates position switching and contains the target position, generating the position switching instruction according to the voice information.
9. A virtual reality position switching apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a position switching instruction, wherein the position switching instruction is an instruction for indicating to switch the display position of a virtual object corresponding to a current account from a first position to a target position in a virtual reality scene;
a determining unit, configured to determine the space coordinate information of the target position and the view angle corresponding to the target position according to the position switching instruction;
a first switching unit, configured to switch a display position of the virtual object in the virtual reality scene from the first position to the target position according to spatial coordinate information of the target position;
and the second switching unit is used for switching the view angle of the virtual object in the virtual reality scene from the view angle corresponding to the first position to the view angle corresponding to the target position according to the view angle corresponding to the target position, and displaying view angle content under the view angle corresponding to the target position.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor for performing the steps of the virtual reality position switching method according to any of claims 1-8.
11. A virtual reality device comprising a processor and a memory, the memory having stored therein a computer program for executing the steps of the virtual reality position switching method of any of claims 1-8 by invoking the computer program stored in the memory.
12. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the virtual reality position switching method of any of claims 1-8.
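As an illustration of the logic recited in claims 2 through 6, the following sketch checks whether the first position and the target position belong to the same position area and, if they do, derives a display proportion from the spacing between the two positions. The `same_area` and `display_proportion` helper names and the linear, clamped scaling formula are assumptions introduced for illustration; the claims state only that the proportion depends on the spacing between the positions and the visual direction of the virtual object, without fixing a formula.

```python
# Hypothetical sketch of the same-area check and distance-based display
# proportion described in claims 2-6. Names and formula are illustrative.
import math


def same_area(pos_a, pos_b, area_of):
    """Claims 2/4: two positions belong to the same position area when the
    area-assignment function maps them to the same area identifier."""
    return area_of(pos_a) == area_of(pos_b)


def display_proportion(first, target, max_distance=10.0):
    """Claim 6 (illustrative formula): shrink the displayed view angle
    content as the target moves farther from the first position,
    clamped to the range [0.1, 1.0]."""
    spacing = math.dist(first, target)
    return max(0.1, 1.0 - spacing / max_distance)


# Example: 2D positions whose area is determined by quadrant sign.
area_of = lambda p: (p[0] >= 0, p[1] >= 0)
first, target = (1.0, 1.0), (3.0, 4.0)
if same_area(first, target, area_of):
    # Same area: keep the first position's view angle, scale its content.
    print(display_proportion(first, target))
```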
CN202210540710.8A 2022-05-17 2022-05-17 Virtual reality position switching method, device, storage medium and equipment Pending CN117115237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210540710.8A CN117115237A (en) 2022-05-17 2022-05-17 Virtual reality position switching method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210540710.8A CN117115237A (en) 2022-05-17 2022-05-17 Virtual reality position switching method, device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN117115237A true CN117115237A (en) 2023-11-24

Family

ID=88811648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210540710.8A Pending CN117115237A (en) 2022-05-17 2022-05-17 Virtual reality position switching method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN117115237A (en)

Similar Documents

Publication Publication Date Title
US9858643B2 (en) Image generating device, image generating method, and program
CN106598229B (en) Virtual reality scene generation method and device and virtual reality system
KR101576294B1 (en) Apparatus and method to perform processing a sound in a virtual reality system
US20190355170A1 (en) Virtual reality content display method and apparatus
US10681276B2 (en) Virtual reality video processing to compensate for movement of a camera during capture
CN110637274B (en) Information processing apparatus, information processing method, and program
JP2019175323A (en) Simulation system and program
CN109791436B (en) Apparatus and method for providing virtual scene
EP3665656B1 (en) Three-dimensional video processing
CN117115237A (en) Virtual reality position switching method, device, storage medium and equipment
US20190089899A1 (en) Image processing device
CN114416237A (en) Display state switching method, device and system, electronic equipment and storage medium
CN116194792A (en) Connection evaluation system
EP3623908A1 (en) A system for controlling audio-capable connected devices in mixed reality environments
WO2024060959A1 (en) Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device
CN114612637B (en) Scene picture display method and device, computer equipment and storage medium
US20240205513A1 (en) Video display system, information processing device, information processing method, and recording medium
JP7510723B1 (en) Character display control system, character display control method and program
CN117476014A (en) Audio processing method, device, storage medium and equipment
US20240013404A1 (en) Image processing method and apparatus, electronic device, and medium
CN118161857A (en) Task display method, device, storage medium and equipment
CN117666769A (en) Virtual scene interaction method and device, storage medium and equipment
CN118229934A (en) Virtual object display method, device, storage medium, equipment and program product
CN117130465A (en) Parameter setting method, device, equipment and storage medium based on XR equipment
CN117115395A (en) Fusion method, device, equipment and medium of virtual reality and real scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination