WO2020114176A1 - Method, device, and storage medium for observing a virtual environment
- Publication number: WO2020114176A1 (PCT/CN2019/115623)
- Authority: WIPO (PCT)
- Prior art keywords: scene, virtual object, virtual, observation, detection
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/005—Input arrangements through a video camera
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/006—Mixed reality
Description
- Embodiments of the present application relate to the field of virtual environments, and in particular, to a method, device, and storage medium for observing a virtual environment.
- A terminal is usually installed with an application developed with a virtual engine, in which display elements such as virtual characters, virtual items, and the ground are displayed as models.
- Virtual items include virtual houses, virtual water towers, virtual hillsides, virtual grasslands, and virtual furniture. Users can control a virtual object to perform virtual operations in the virtual environment.
- Typically, the virtual environment is observed through a camera model with the virtual object as the observation center; the camera model is a three-dimensional model that is separated from the virtual object by a certain distance in the virtual environment, with its shooting direction facing the virtual object.
- The virtual environment usually includes different observation scenes, such as dim scenes, bright scenes, indoor scenes, or outdoor scenes.
- Observing the virtual environment in the single observation mode above can cause incompatibilities between the observation modes required by different observation scenes. For example, in an indoor scene the view has a high probability of being blocked by indoor furniture, and in a dim scene the observation mode cannot clearly present the virtual items in the virtual environment.
- These incompatibilities affect combat: the user has to repeatedly adjust the viewing angle of the virtual object or the screen display brightness of the terminal itself.
- Embodiments of the present application provide a method, device, and storage medium for observing a virtual environment.
- A method for observing a virtual environment is executed by a terminal.
- The method includes:
- displaying a first environment screen of an application, the first environment screen including a virtual object in a first scene and being a screen in which the virtual environment is observed in a first observation mode;
- receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation mode for observing the virtual environment;
- adjusting the first observation mode to a second observation mode according to the movement operation, the first observation mode corresponding to the first scene and the second observation mode corresponding to the second scene; and
- displaying a second environment screen of the application, the second environment screen including the virtual object in the second scene and being a screen in which the virtual environment is observed in the second observation mode.
- a device for observing a virtual environment includes:
- a display module configured to display a first environment screen of an application, where the first environment screen includes a virtual object in a first scene and is a screen in which the virtual environment is observed in a first observation mode;
- a receiving module configured to receive a movement operation, where the movement operation is used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation mode for observing the virtual environment;
- an adjustment module configured to adjust the first observation mode to a second observation mode according to the movement operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene; and
- the display module is further configured to display a second environment screen of the application, where the second environment screen includes the virtual object in the second scene and is a screen in which the virtual environment is observed in the second observation mode.
- A terminal includes a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the above method, including receiving the movement operation used to transfer the virtual object from the first scene to the second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation mode for observing the virtual environment.
- A non-volatile computer-readable storage medium stores computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the above method for observing a virtual environment.
- FIG. 1 is a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
- FIG. 2 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
- FIG. 3 is a schematic diagram of observing a virtual environment through a camera model, provided by an exemplary embodiment of the present application;
- FIG. 4 is a flowchart of a method for observing a virtual environment provided by an exemplary embodiment of the present application
- FIG. 5 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in the related art based on the embodiment shown in FIG. 4;
- FIG. 6 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in the present application based on the embodiment shown in FIG. 4;
- FIG. 7 is a schematic diagram of observing a virtual environment in another indoor and outdoor scene in this application based on the embodiment shown in FIG. 4;
- FIG. 8 is a schematic diagram of observing a virtual environment in an indoor and outdoor scene in another related art based on the embodiment shown in FIG. 4;
- FIG. 9 is a schematic diagram of observing a virtual environment in another indoor and outdoor scene in this application based on the embodiment shown in FIG. 4;
- FIG. 10 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application.
- FIG. 11 is a schematic diagram of vertical ray detection based on the embodiment shown in FIG. 10;
- FIG. 12 is a schematic diagram of another vertical ray detection based on the embodiment shown in FIG. 10;
- FIG. 13 is a schematic diagram of another vertical ray detection based on the embodiment shown in FIG. 10;
- FIG. 14 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application.
- FIG. 15 is a schematic diagram of horizontal ray detection based on the embodiment shown in FIG. 14;
- FIG. 16 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application;
- FIG. 17 is a structural block diagram of an apparatus for observing a virtual environment provided by an exemplary embodiment of the present application.
- FIG. 18 is a structural block diagram of an apparatus for observing a virtual environment provided by another exemplary embodiment of the present application.
- FIG. 19 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
- Virtual environment: the virtual environment displayed (or provided) when an application runs on a terminal.
- The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment.
- the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
- The following embodiments describe the virtual environment as a three-dimensional virtual environment, but this is not limiting.
- The virtual environment is also used for battles between at least two virtual characters.
- The virtual environment is also used for battles between at least two virtual characters using virtual firearms.
- The virtual environment is also used for at least two virtual characters to compete using virtual firearms within a target area whose range continuously shrinks as time passes in the virtual environment.
- Virtual object: a movable object in the virtual environment.
- The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character.
- When the virtual environment is a three-dimensional virtual environment, the virtual object is a three-dimensional model created based on skeletal animation technology.
- Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
- Observation scene: a scene corresponding to at least one observation mode for observing the virtual environment.
- When the observation scene corresponds to at least one observation mode that observes the virtual environment from a target angle of view of the virtual object, the person perspective of those observation modes is the same, while at least one parameter among the observation angle, the observation distance, and the observation configuration (for example, whether a night vision device is turned on) differs. When the observation scene corresponds to at least one observation mode that observes from a target observation angle of the virtual object, the observation angle is the same, while at least one parameter among the person perspective, the observation distance, and the observation configuration differs. When the observation scene corresponds to at least one observation mode that observes at a target observation distance from the virtual object, the observation distance is the same, while at least one parameter among the person perspective, the observation angle, and the observation configuration differs. When the observation scene corresponds to at least one observation mode that observes with a target observation configuration of the virtual object, the observation configuration is the same, while at least one of the other parameters differs.
- the observation scene is a scene corresponding to a specific observation mode for observing the virtual environment.
- An observation scene corresponds to scene characteristics, and the observation mode corresponding to the observation scene is a mode set for those scene characteristics.
- The scene characteristics include at least one of lighting conditions, scene height, and the density of virtual items in the scene.
- The observation scenes in the virtual environment can be divided into various types, and multiple observation scenes can be superimposed to form a new observation scene.
- The observation scenes include at least one of an indoor scene, an outdoor scene, a dim scene, a bright scene, a house area scene, a mountain scene, an air-raid shelter scene, and an object stacking scene. For example, an indoor scene can be superimposed with a dim scene to form a new indoor dim scene, such as an indoor room without lights, and a house scene can be superimposed with a mountain scene to form a new scene of a house on a mountain.
- Camera model: a three-dimensional model located around the virtual object in a three-dimensional virtual environment.
- the camera model is located near the head of the virtual object or at the head of the virtual object.
- the camera model can be located behind the virtual object and bound to the virtual object, or it can be located at any position away from the virtual object at a preset distance.
- Through the camera model, the virtual environment in the three-dimensional virtual environment can be observed from different angles. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the virtual object (for example, behind the head and shoulders of the virtual character).
- the camera model is not actually displayed in the three-dimensional virtual environment, that is, the camera model cannot be recognized in the three-dimensional virtual environment displayed on the user interface.
- The terminal in this application may be a desktop computer, a laptop portable computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on.
- An application program supporting the virtual environment, such as an application program supporting a three-dimensional virtual environment, is installed and running on the terminal.
- the application program may be any of virtual reality application programs, three-dimensional map programs, military simulation programs, TPS games, FPS games, and MOBA games.
- the application program may be a stand-alone version of the application program, such as a stand-alone version of the 3D game program, or an online version of the application program.
- FIG. 1 shows a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
- the electronic device may specifically be the terminal 100, and the terminal 100 includes an operating system 120 and an application program 122.
- the operating system 120 is the basic software that provides the application 122 with secure access to computer hardware.
- the application 122 is an application that supports a virtual environment.
- the application 122 is an application that supports a three-dimensional virtual environment.
- The application 122 may be any of a virtual reality application, a three-dimensional map program, a military simulation program, a third-person shooting game (TPS), a first-person shooting game (FPS), a MOBA game, or a multiplayer shootout survival game.
- the application 122 may be a stand-alone version of the application, such as a stand-alone version of the 3D game program.
- FIG. 2 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
- the computer system 200 includes: a first device 220, a server 240, and a second device 260.
- the first device 220 has an application program that supports the virtual environment installed and running.
- the application can be any of virtual reality applications, three-dimensional map programs, military simulation programs, TPS games, FPS games, MOBA games, and multiplayer shootout survival games.
- the first device 220 is a device used by the first user.
- the first user uses the first device 220 to control the first virtual object located in the virtual environment to perform activities.
- The activities include, but are not limited to, at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
- The first virtual object is a first virtual character, such as a simulated person character or an anime character.
- the first device 220 is connected to the server 240 through a wireless network or a wired network.
- the server 240 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
- the server 240 is used to provide background services for applications that support a three-dimensional virtual environment.
- The server 240 undertakes the primary computing work while the first device 220 and the second device 260 undertake the secondary computing work; or the server 240 undertakes the secondary computing work while the first device 220 and the second device 260 undertake the primary computing work; or the server 240, the first device 220, and the second device 260 adopt a distributed computing architecture for collaborative computing.
- the second device 260 has an application program supporting the virtual environment installed and running.
- the application may be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer shootout survival games.
- the second device 260 is a device used by the second user.
- The second user uses the second device 260 to control a second virtual object located in the virtual environment to perform activities, including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
- The second virtual object is a second virtual character, such as a simulated person character or an anime character.
- The first virtual character and the second virtual character are in the same virtual environment.
- The first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication rights.
- The first virtual character and the second virtual character may also belong to different teams or different organizations, or be two groups that are hostile to each other.
- the application programs installed on the first device 220 and the second device 260 are the same, or the application programs installed on the two devices are the same type of application programs on different control system platforms.
- the first device 220 may refer to one of multiple devices, and the second device 260 may refer to one of multiple devices. In this embodiment, only the first device 220 and the second device 260 are used as examples.
- the device types of the first device 220 and the second device 260 are the same or different.
- The device types include at least one of a game console, a desktop computer, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments are exemplified with the device being a desktop computer.
- The number of the above-mentioned devices may be larger or smaller; for example, there may be only one such device, or there may be dozens, hundreds, or more.
- the embodiments of the present application do not limit the number and types of devices.
- the camera model is located at a predetermined distance from the virtual object.
- a virtual object corresponds to a camera model, and the camera model can be rotated using the virtual object as the center of rotation, for example, rotating the camera model using any point of the virtual object as the center of rotation.
- During rotation, the camera model not only rotates in angle but also shifts in displacement; the distance between the camera model and the rotation center remains unchanged, that is, the camera model rotates on the surface of a sphere centered on the rotation center.
- The rotation center may be any point of the virtual object, such as the head or the torso, or any point around the virtual object, which is not limited in the embodiments of the present application.
- The viewing direction of the camera model is the direction in which the normal of the tangent plane at the camera model's position on the spherical surface points toward the virtual object.
- the camera model can also observe the virtual environment at a preset angle in different directions of the virtual object.
- As shown in FIG. 3, a point in the virtual object 31 is determined as the rotation center 32, and the camera model rotates around the rotation center 32. The camera model is configured with an initial position, which is a position above and behind the virtual object (such as a position behind the head).
- For example, the initial position is position 33.
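- To make the sphere-surface rotation concrete, the following is a minimal sketch (not from the application itself) that computes a camera position from the rotation center, a fixed observation distance, and two rotation angles; the function and parameter names (camera_position, yaw_deg, pitch_deg) are illustrative assumptions.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    """Place the camera on a sphere of the given radius around the rotation
    center; the viewing direction is then from the computed point toward the
    center, i.e. the inward normal of the sphere's tangent plane."""
    yaw = math.radians(yaw_deg)      # horizontal rotation around the center
    pitch = math.radians(pitch_deg)  # elevation above the horizontal plane
    cx, cy, cz = center
    return (cx + radius * math.cos(pitch) * math.cos(yaw),
            cy + radius * math.cos(pitch) * math.sin(yaw),
            cz + radius * math.sin(pitch))  # z is vertical, as in FIG. 11

# Initial position above and behind the virtual object (position 33 in FIG. 3):
print(camera_position(center=(0.0, 0.0, 1.6), radius=3.0, yaw_deg=180.0, pitch_deg=30.0))
```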
- FIG. 4 shows a flowchart of a method for observing a virtual environment provided by an exemplary embodiment of the present application. The method is described as applied to the terminal 100 shown in FIG. 1. As shown in FIG. 4, the method includes:
- Step S401: Display a first environment screen of an application program.
- the first environment screen is a screen for observing the virtual environment in the first observation mode in the virtual environment.
- the first environment picture includes virtual objects in the first scene.
- The virtual object belongs to at least one observation scene in the virtual environment.
- The observation scenes in the virtual environment include at least one of an indoor scene, an outdoor scene, a dim scene, and a bright scene. Since the indoor scene and the outdoor scene are two independent and complementary observation scenes, the virtual object is either in an indoor scene or in an outdoor scene.
- The first observation mode is the observation mode corresponding to the first scene.
- Each observation scene corresponds to an observation mode, and the correspondence is preset.
- When the position of the virtual object belongs to more than one observation scene, the observation mode for observing the virtual environment may combine the observation modes corresponding to those scenes, or may be one of them.
- Priorities can be set for the different observation modes, and the observation mode with the higher priority is selected to observe the virtual environment at that position; alternatively, one observation mode can be selected at random from the multiple candidates.
- For example, the observation mode corresponding to the indoor scene is to observe the virtual environment at a first distance from the virtual object, and the observation mode corresponding to the dim scene is to observe the virtual environment through a night vision device. When the virtual object is both indoors and in a dim scene, the virtual environment can be observed through the night vision device at the first distance from the virtual object, or only through the night vision device at the second distance from the virtual object (the second distance being the default distance), as sketched below.
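- The application does not prescribe how the candidate modes are resolved; the following is a minimal sketch, assuming a hypothetical priority table, of both strategies described above (picking the highest-priority mode, or superimposing compatible settings).

```python
# Hypothetical observation-mode table; the scene names, priorities, and
# settings are illustrative assumptions, not values from the application.
MODES = {
    "outdoor": {"priority": 0, "distance": "default", "night_vision": False},
    "dim":     {"priority": 1, "distance": "default", "night_vision": True},
    "indoor":  {"priority": 2, "distance": "first",   "night_vision": False},
}

def select_mode(active_scenes):
    """Strategy 1: observe with the mode of the highest-priority active scene."""
    best = max(active_scenes, key=lambda s: MODES[s]["priority"])
    return MODES[best]

def combine_modes(active_scenes):
    """Strategy 2: superimpose scenes, e.g. indoor + dim yields observing at
    the first distance with the night vision device turned on."""
    merged = {"distance": "default", "night_vision": False}
    for scene in sorted(active_scenes, key=lambda s: MODES[s]["priority"]):
        merged["night_vision"] = merged["night_vision"] or MODES[scene]["night_vision"]
        if MODES[scene]["distance"] != "default":
            merged["distance"] = MODES[scene]["distance"]
    return merged

print(select_mode({"indoor", "dim"}))   # the indoor mode wins by priority
print(combine_modes({"indoor", "dim"})) # first distance + night vision on
```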
- Step S402 Receive a movement operation, which is used to transfer the virtual object from the first scene to the second scene.
- the first scene and the second scene are two different observation scenes.
- the first scene and the second scene are two mutually complementary observation scenes, that is, if the virtual object is not in the first scene, it is in the second scene.
- The first scene is an outdoor scene and the second scene is an indoor scene; or the first scene is a bright scene and the second scene is a dim scene; or the first scene is an object stacking scene and the second scene is a field scene.
- Transferring the virtual object from the first scene to the second scene according to the movement operation may be implemented as moving the virtual object from outdoors to indoors according to the movement operation; the first scene is then an outdoor scene and the second scene is an indoor scene.
- The indoor scene may also be a dim scene, in which case the first scene is a bright scene and the second scene is a dim scene.
- The movement operation may be generated when the user slides on the touch display screen, or when the user presses a physical key of the mobile terminal.
- The movement operation may also correspond to a signal that the terminal receives from an external input device; for example, the user sends a movement signal to the terminal as the movement operation by operating a mouse or a keyboard.
- Step S403: Adjust the first observation mode to the second observation mode according to the movement operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene.
- The parameters of an observation mode include at least one of the observation angle, the observation distance, whether a night vision device is turned on, and the person perspective.
- The terminal detects the observation scene where the virtual object is located in the virtual environment at preset intervals. In some embodiments, the detection process includes at least one of the following situations:
- When the first scene is an outdoor scene and the second scene is an indoor scene, a collision detection method is used to detect the observation scene where the virtual object is located in the virtual environment, and when it is detected that the virtual object moves from the outdoor scene to the indoor scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
- The first observation mode is a mode in which the camera model observes the virtual environment at a first distance from the virtual object, and the second observation mode is a mode in which the camera model observes the virtual environment at a second distance from the virtual object.
- The camera model is a three-dimensional model that observes around the virtual object in the virtual environment; the first distance is greater than the second distance, that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
- In the related art, the observation distance for observing the virtual object is the same in every scene. Referring to FIG. 5, in the indoor scene the distance between the camera model 50 and the virtual object 51 is a, and in the outdoor scene the distance between the camera model 50 and the virtual object 51 is also a.
- The distance between the camera model 50 and the virtual object 51 may be regarded as the distance between the physical center point of the camera model 50 and the physical center point of the virtual object 51, or as any distance between the camera model 50 and the virtual object 51.
- In contrast, referring to FIG. 6, in the outdoor scene the distance between the camera model 60 and the virtual object 61 is a, while in the indoor scene the distance between the camera model 60 and the virtual object 61 is b, where b < a.
- The first observation mode may also be a mode in which the camera model observes the virtual environment from a first viewing angle, and the second observation mode a mode in which the camera model observes the virtual environment from a second viewing angle.
- The angle between the direction of the first viewing angle and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second viewing angle and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first viewing angle to the second viewing angle according to the movement operation.
- Referring to FIG. 7, in the outdoor scene the angle between the camera model 70 and the horizontal direction 73 when observing the virtual object 71 is α, while in the indoor scene the angle at which the camera model 70 observes the virtual object 71 is β, where α < β.
- The first observation mode may also be a third-person observation mode and the second observation mode a first-person observation mode; that is, the observation perspective is converted from the third-person perspective to the first-person perspective according to the movement operation, as the sketch below illustrates.
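- A minimal sketch of the three adjustments above (shorter distance, larger pitch angle, optional person switch), with purely illustrative numbers since the application fixes no concrete values:

```python
from dataclasses import dataclass

@dataclass
class ObservationMode:
    distance: float     # camera-to-object distance (a outdoors, b indoors, b < a)
    pitch_deg: float    # angle to the horizontal direction (alpha, beta, alpha < beta)
    first_person: bool  # True if the third-person view switches to first person

OUTDOOR_MODE = ObservationMode(distance=4.0, pitch_deg=25.0, first_person=False)
INDOOR_MODE = ObservationMode(distance=2.0, pitch_deg=40.0, first_person=False)

def adjust_observation_mode(new_scene: str) -> ObservationMode:
    """Adjust the first observation mode to the second according to the scene
    the movement operation transferred the virtual object into."""
    return INDOOR_MODE if new_scene == "indoor" else OUTDOOR_MODE

print(adjust_observation_mode("indoor"))  # shorter distance, steeper angle
```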
- When the first scene is a bright scene and the second scene is a dim scene, a color detection method is used to detect the observation scene where the virtual object is located in the virtual environment, and when it is detected that the virtual object moves from the bright scene to the dim scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
- The first observation mode is an observation mode in which the night vision device is turned off, that is, the virtual objects and the virtual environment are not observed through a night vision device; the second observation mode is an observation mode in which the night vision device is turned on, that is, the virtual environment is observed through the night vision device.
- The color detection method detects the pixels in the display interface; when the detected pixel values indicate that the picture has become dim, the virtual object is considered to have moved from the first scene to the second scene, as sketched below.
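- The application does not spell out the pixel test. The sketch below is one plausible reading, assuming the color detection is an average-luminance check over the frame's RGB pixels; the threshold is an invented value.

```python
def is_dim_scene(pixels, threshold=60.0):
    """Classify the displayed picture as a dim scene when the mean luminance
    (ITU-R BT.601 weights) of its RGB pixels falls below the threshold.
    `pixels` is an iterable of (r, g, b) tuples in the 0..255 range."""
    total = count = 0
    for r, g, b in pixels:
        total += 0.299 * r + 0.587 * g + 0.114 * b
        count += 1
    return count > 0 and total / count < threshold

frame = [(20, 22, 30)] * 100 + [(200, 210, 190)] * 5  # mostly dark pixels
print(is_dim_scene(frame))  # True -> turn on the night vision observation mode
```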
- When the first scene is a field scene and the second scene is an object stacking scene, the observation scene where the virtual object is located in the virtual environment is detected through a scene identifier verification method, and when it is detected that the virtual object moves from the field scene to the object stacking scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
- The coordinates of the position where the virtual object is located correspond to a scene identifier, and the scene where the virtual object is located is verified according to that scene identifier, as sketched below.
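- A minimal sketch of such a scene identifier lookup, assuming a hypothetical grid that maps position coordinates to scene identifiers (the cell size and identifiers are invented for illustration):

```python
CELL_SIZE = 10.0  # assumed granularity, in virtual-environment units per cell

# Hypothetical scene-identifier map: each cell of the map stores an identifier.
SCENE_IDS = {
    (0, 0): "field",
    (0, 1): "object_stacking",
    (1, 1): "house_area",
}

def scene_at(x, y):
    """Verify the observation scene from the scene identifier stored for the
    cell containing the virtual object's (x, y) coordinates."""
    cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
    return SCENE_IDS.get(cell, "field")

print(scene_at(3.0, 17.5))  # -> "object_stacking"
```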
- The first observation mode includes the camera model observing the virtual environment at a first distance from the virtual object, and the second observation mode includes the camera model observing the virtual environment at a second distance from the virtual object.
- The camera model includes a three-dimensional model that observes around the virtual object in the virtual environment; the first distance is greater than the second distance, that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
- The first observation mode may also be a mode in which the camera model observes the virtual environment from a first viewing angle, and the second observation mode a mode in which the camera model observes the virtual environment from a second viewing angle.
- The angle between the direction of the first viewing angle and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second viewing angle and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first viewing angle to the second viewing angle according to the movement operation.
- The first observation mode may also be a third-person observation mode and the second observation mode a first-person observation mode; that is, the observation perspective is converted from the third-person perspective to the first-person perspective according to the movement operation.
- Step S404: Display a second environment screen of the application.
- the second environment screen is a screen for observing the virtual environment in the second observation mode in the virtual environment.
- the second environment picture includes virtual objects in the second scene.
- The first environment picture and the second environment picture are described below, taking as an example that the first scene is an outdoor scene, the second scene is an indoor scene, and the distance between the camera model and the virtual object is adjusted.
- First, the environment pictures in the related art are described. Referring to FIG. 8, in the indoor scene, the first screen 81 includes the virtual object 82; from the virtual door 83 and the virtual cabinet 84 it can be seen that the virtual object 82 is in an indoor scene. In the outdoor scene, the second screen 85 also includes the virtual object 82, which is now in an outdoor scene.
- The second screen 85 further includes a virtual object 87, which forms an obstruction below the virtual object 82.
- Next, the first environment picture and the second environment picture corresponding to the scheme of the present application are described. Referring to FIG. 9, in the indoor scene, the first environment picture 91 includes the virtual object 92; from the virtual door 93 and the virtual cabinet 94 it can be seen that the virtual object 92 is in an indoor scene. In the outdoor scene, the second environment picture 95 includes the virtual object 92, and from the virtual cloud 96 it can be seen that the virtual object 92 is in an outdoor scene.
- The virtual object 87 that forms an obstruction in the second screen 85 is not displayed in the second environment picture 95; that is, the virtual object 87 does not block the line of sight of the virtual object or of the camera model.
- In summary, in the method for observing the virtual environment provided by this embodiment, the way the virtual object is observed changes according to the observation scene where the virtual object is located, so that the observation mode is adapted to the observation scene. This avoids the problem that, when virtual objects in different observation scenes are observed in one single observation mode, an unsuitable observation angle, observation distance, or observation configuration affects combat.
- In some embodiments, the first scene is an outdoor scene and the second scene is an indoor scene, and the terminal detects the observation scene where the virtual object is located in the virtual environment through a collision detection method, the collision detection method being vertical ray detection.
- FIG. 10 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. The method is described as applied to the terminal 100 shown in FIG. 1. As shown in FIG. 10, the method includes:
- Step S1001: Display the first environment screen of the application.
- the first environment picture includes a virtual object in the first scene, and the first environment picture is a picture of the virtual environment observed in the virtual environment in the first observation mode.
- The virtual object belongs to at least one observation scene in the virtual environment.
- The observation scene in the virtual environment is either an indoor scene or an outdoor scene; since the indoor scene and the outdoor scene are two independent and complementary observation scenes, if the virtual object is not in the indoor scene, it is in the outdoor scene.
- Step S1002: Receive a movement operation.
- The movement operation is used to transfer the virtual object from the first scene to the second scene, where the first scene is an outdoor scene and the second scene is an indoor scene; that is, the movement operation is used to move the virtual object from the outdoor scene to the indoor scene.
- Step S1003: Using a target point in the virtual object as the starting point, perform vertical ray detection along the vertically upward direction in the virtual environment.
- The target point may be any one of the physical center point of the virtual object, the point corresponding to the head, the point corresponding to the arm, or the point corresponding to the leg; it may be any point in the virtual object, or any point corresponding to the virtual object outside the virtual object.
- The vertical ray detection may also be performed with a ray cast vertically downward in the virtual environment.
- Referring to FIG. 11, the coordinate system 111 is the three-dimensional coordinate system applied in the virtual environment, where the direction of the z-axis is the vertically upward direction. The terminal casts a vertical ray 114 upward from the target point 113 of the virtual object 112 for detection.
- The vertical ray 114 is shown in FIG. 11 for illustration only; in an actual application scenario, the vertical ray 114 may not be displayed in the environment picture.
- Step S1004: Receive the first detection result returned after the vertical ray detection is performed.
- The first detection result is used to represent the virtual object collided with in the vertically upward direction of the virtual object.
- The first detection result includes the object identifier of the first virtual object collided with by the vertical ray detection, and/or the length of the ray when the vertical ray detection collides with the first virtual object; when nothing is collided with, the first detection result is empty.
- Step S1005 Determine the observation scene where the virtual object is located according to the first detection result.
- the way that the terminal determines the observation scene where the virtual object is located according to the first detection result includes any one of the following ways:
- When the first detection result includes the object identifier of the first virtual object collided with by the vertical ray detection: if the object identifier in the first detection result is a virtual house identifier, the terminal determines that the observation scene where the virtual object is located is an indoor scene; if the object identifier is any identifier other than a virtual house identifier, the terminal determines that the observation scene where the virtual object is located is an outdoor scene.
- Referring to FIG. 12, the virtual object 120 is in an indoor scene; a vertical ray is cast upward from the target point 121 of the virtual object 120, and the vertical ray 122 returns after colliding with a virtual house, so the terminal determines that the virtual object 120 is inside a virtual house, that is, in an indoor scene. In FIG. 13, the virtual object 130 is in an outdoor scene; vertical ray detection is performed vertically upward from the target point 131 of the virtual object 130, and since the vertical ray 132 does not collide with any virtual object, a null value (null) is returned and the terminal determines that the virtual object 130 is in an outdoor scene.
- When the first detection result includes the length of the ray at which the vertical ray detection collides with the first virtual object: if the ray length in the first detection result is less than or equal to a preset length, the terminal determines that the observation scene where the virtual object is located is an indoor scene; if the ray length exceeds the preset length, the terminal determines that the observation scene where the virtual object is located is an outdoor scene.
- For example, with a preset length of 2 m, when the ray length in the first detection result is at most 2 m the terminal determines that the virtual object is in an indoor scene, and when the ray length exceeds 2 m the terminal determines that the virtual object is in an outdoor scene. A combined sketch of both decision rules follows.
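- The sketch below combines the two alternatives (object identifier and ray length) into one classifier; the engine raycast is abstracted as a callable because the application names no concrete engine API, and the identifiers are illustrative.

```python
def classify_by_vertical_ray(raycast_up, preset_length=2.0):
    """Decide indoor vs. outdoor from one vertical ray cast upward from the
    virtual object's target point. `raycast_up()` must return None when
    nothing is hit, or an (object_id, ray_length) pair otherwise."""
    hit = raycast_up()
    if hit is None:
        return "outdoor"  # null result: nothing above the virtual object
    object_id, length = hit
    if object_id.startswith("virtual_house") and length <= preset_length:
        return "indoor"   # a virtual house roof within the preset length
    return "outdoor"

# Stubbed raycasts standing in for the engine call:
print(classify_by_vertical_ray(lambda: ("virtual_house_07", 1.4)))  # indoor
print(classify_by_vertical_ray(lambda: None))                       # outdoor
```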
- The above steps S1003 to S1005 are executed throughout the display of the environment picture; that is, the observation scene where the virtual object is located is detected for every frame of the environment picture.
- For example, when every second includes 30 frames of the environment picture, the terminal performs 30 detections per second of the observation scene where the virtual object is located.
- Step S1006: When it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, adjust the first observation mode to the second observation mode.
- the first observation mode corresponds to the first scene
- the second observation mode corresponds to the second scene
- Step S1007 Display the second environment screen of the application.
- the second environment picture includes virtual objects in the second scene, and the second environment picture is a picture of the virtual environment observed in the virtual environment in the second observation mode.
- In summary, in the method for observing the virtual environment provided by this embodiment, the way the virtual object is observed changes according to the observation scene where the virtual object is located, so that the observation mode is adapted to the observation scene, avoiding the problem that a single observation mode yields an unsuitable observation angle, observation distance, or observation configuration across different observation scenes.
- In addition, the method provided in this embodiment judges the observation scene where the virtual object is located by vertical ray detection, detecting the observation scene in a convenient and accurate manner and thereby avoiding the problem that combat is affected by an unsuitable observation angle, observation distance, or observation configuration.
- In some embodiments, the first scene is an outdoor scene and the second scene is an indoor scene, and the terminal detects the observation scene where the virtual object is located in the virtual environment through a collision detection method, the collision detection method being horizontal ray detection.
- FIG. 14 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. The method is described as applied to the terminal 100 shown in FIG. 1. As shown in FIG. 14, the method includes:
- Step S1401: Display the first environment screen of the application.
- the first environment picture includes a virtual object in the first scene, and the first environment picture is a picture of the virtual environment observed in the virtual environment in the first observation mode.
- The virtual object belongs to at least one observation scene in the virtual environment.
- The observation scene in the virtual environment is either an indoor scene or an outdoor scene; since the indoor scene and the outdoor scene are two independent and complementary observation scenes, if the virtual object is not in the indoor scene, it is in the outdoor scene.
- Step S1402: Receive a movement operation.
- The movement operation is used to transfer the virtual object from the first scene to the second scene, where the first scene is an outdoor scene and the second scene is an indoor scene; that is, the movement operation is used to move the virtual object from the outdoor scene to the indoor scene.
- Step S1403: Using the target point in the virtual object as the starting point, cast at least three detection rays in mutually different directions along the horizontal direction in the virtual environment.
- The target point may be any one of the physical center point of the virtual object, the point corresponding to the head, the point corresponding to the arm, or the point corresponding to the leg; it may be any point in the virtual object, or any point corresponding to the virtual object outside the virtual object.
- The included angle between every two of the at least three detection rays is greater than a preset included angle.
- For example, when the minimum included angle between every two detection rays is 90°, there are at most four detection rays. With three detection rays, the included angle between every two rays may be 120°; or two of the included angles may be 90° and the third 180°; or any combination in which every included angle is greater than or equal to 90° may be used.
- FIG. 15 shows a top view of the virtual object 1501. With the target point 1502 of the virtual object 1501 as the starting point, a detection ray 1503, a detection ray 1504, and a detection ray 1505 are cast in the horizontal direction, where the angle between the detection ray 1503 and the detection ray 1504 is 90°, the angle between the detection ray 1504 and the detection ray 1505 is 110°, and the angle between the detection ray 1503 and the detection ray 1505 is 160°.
- Step S1404: Receive the second detection result returned by performing horizontal ray detection with the at least three detection rays.
- The second detection result is used to represent the virtual objects that the detection rays collide with in the horizontal direction.
- Step S1405 Determine the observation scene where the virtual object is located according to the second detection result.
- the manner of determining the observation scene where the virtual object is located according to the second detection result includes any one of the following ways:
- When the second detection result includes the ray lengths at which the at least three detection rays collide with first virtual objects: if, among the at least three detection rays, the ray lengths of not less than half of the rays that collide with a first virtual object are within a preset length, the terminal determines that the virtual object is in an indoor scene; if more than half of the detection rays collide with a first virtual object at ray lengths exceeding the preset length, the terminal determines that the observation scene where the virtual object is located is an outdoor scene.
- When the second detection result includes the object identifiers of the first virtual objects collided with by the at least three detection rays: if, among the at least three detection rays, the object identifiers returned by not less than half of the rays are house identifiers, the terminal determines that the virtual object is in an indoor scene; if the object identifiers returned by more than half of the detection rays are not house identifiers, the terminal determines that the virtual object is in an outdoor scene. Both decision rules are sketched below.
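- A minimal sketch of the horizontal detection, with evenly spaced rays satisfying the 90° constraint and a not-less-than-half vote; the raycast callable and identifiers are assumptions, as above.

```python
import math

def horizontal_ray_directions(count=3):
    """Evenly spaced horizontal detection rays; with three rays the pairwise
    included angle is 120 degrees, satisfying the >= 90 degree constraint."""
    return [(math.cos(math.radians(i * 360.0 / count)),
             math.sin(math.radians(i * 360.0 / count))) for i in range(count)]

def classify_by_horizontal_rays(raycast, preset_length=5.0, count=3):
    """Indoor if not less than half of the detection rays collide with a house
    object within the preset length; outdoor otherwise. `raycast(direction)`
    returns None or an (object_id, ray_length) pair."""
    hits = 0
    for direction in horizontal_ray_directions(count):
        result = raycast(direction)
        if result is not None:
            object_id, length = result
            if object_id.startswith("virtual_house") and length <= preset_length:
                hits += 1
    return "indoor" if 2 * hits >= count else "outdoor"

# Stub: every ray hits a nearby wall of the same virtual house.
print(classify_by_horizontal_rays(lambda d: ("virtual_house_07", 2.5)))  # indoor
```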
- Step S1406: When it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, adjust the first observation mode to the second observation mode.
- the first observation mode corresponds to the first scene
- the second observation mode corresponds to the second scene
- Step S1407: Display the second environment screen of the application.
- the second environment picture includes virtual objects in the second scene, and the second environment picture is a picture of the virtual environment observed in the virtual environment in the second observation mode.
- In summary, in the method for observing the virtual environment provided by this embodiment, the way the virtual object is observed changes according to the observation scene where the virtual object is located, so that the observation mode is adapted to the observation scene, avoiding the problem that a single observation mode yields an unsuitable observation angle, observation distance, or observation configuration across different observation scenes.
- In addition, the method provided in this embodiment judges the observation scene where the virtual object is located by horizontal ray detection, detecting the observation scene in a convenient and accurate manner and thereby avoiding the problem that combat is affected by an unsuitable observation angle, observation distance, or observation configuration.
- FIG. 16 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. As shown in FIG. 16, the method includes:
- Step S1601: For each frame of the picture, the client detects the observation scene where the virtual object is located.
- For example, when every second includes 30 frames of the environment picture, the terminal performs 30 detections per second of the observation scene where the virtual object is located.
- Step S1602 the user controls the virtual object in the client to enter the room.
- the terminal receives a movement operation, which is used to control the movement of the virtual object in the virtual environment.
- Step S1603: The client detects through ray detection that the virtual object is in an indoor scene.
- Step S1604 Adjust the distance between the camera model and the virtual object from the first distance to the second distance.
- the first distance is greater than the second distance, that is, when the virtual object moves from the outdoor scene to the indoor scene, the distance between the camera model and the virtual object is reduced.
- Step S1605: The user controls the virtual object in the client to move outdoors.
- Step S1606: The client detects through ray detection that the virtual object is in the outdoor scene.
- Step S1607 Adjust the distance between the camera model and the virtual object from the second distance to the first distance.
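- The per-frame loop of FIG. 16 can be sketched as follows, reusing a detector such as the ray classifiers above; the distances are illustrative.

```python
FIRST_DISTANCE, SECOND_DISTANCE = 4.0, 2.0  # illustrative values, with b < a

def per_frame_update(camera, detect_scene):
    """Each frame: detect the observation scene by ray detection and move the
    camera model between the first and second distances accordingly."""
    camera["distance"] = SECOND_DISTANCE if detect_scene() == "indoor" else FIRST_DISTANCE

camera = {"distance": FIRST_DISTANCE}
for scene in ["outdoor", "indoor", "indoor", "outdoor"]:  # e.g. 30 frames/second
    per_frame_update(camera, lambda s=scene: s)
    print(scene, camera["distance"])
```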
- In summary, in the method for observing the virtual environment provided by this embodiment, the way the virtual object is observed changes according to the observation scene where the virtual object is located, so that the observation mode is adapted to the observation scene, avoiding the problem that observing virtual objects in different observation scenes in one single observation mode leads to an unsuitable observation angle, observation distance, or observation configuration that affects combat.
- In addition, the method provided in this embodiment shortens the distance between the camera model and the virtual object when the virtual object is in an indoor scene, reducing the cases in which virtual objects block the line of sight.
- FIG. 17 is a structural block diagram of an apparatus for observing a virtual environment provided by an exemplary embodiment of the present application.
- the apparatus may be implemented in the terminal 100 shown in FIG. 1.
- the apparatus includes:
- The display module 1710 is configured to display a first environment screen of an application, where the first environment screen includes a virtual object in the first scene and is a screen in which the virtual environment is observed in a first observation mode.
- The receiving module 1720 is configured to receive a movement operation, where the movement operation is used to transfer the virtual object from the first scene to the second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation mode for observing the virtual environment.
- the adjustment module 1730 is configured to adjust the first observation mode to the second observation mode according to the movement operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene.
- The display module 1710 is further configured to display a second environment screen of the application, where the second environment screen includes the virtual object in the second scene and is a screen in which the virtual environment is observed in the second observation mode.
- the first scene includes an outdoor scene
- the second scene includes an indoor scene
- the adjustment module 1730 includes:
- the detection unit 1731 is configured to detect the observation scene where the virtual object is located in the virtual environment through the collision detection method.
- the adjusting unit 1732 is configured to adjust the first observation mode to the second observation mode when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
- the first observation mode includes a camera model observing the virtual environment at a first distance from the virtual object
- the second observation mode includes the camera model observing the virtual environment at a second distance from the virtual object
- the camera model includes a three-dimensional model for observation around the virtual object in the virtual environment, and the first distance is greater than the second distance.
- the adjusting unit 1732 is also used to adjust the distance between the camera model and the virtual object from the first distance to the second distance.
- the first observation mode includes a camera model observing the virtual environment from a first perspective
- the second observation mode includes a camera model observing the virtual environment from a second perspective
- The camera model includes a three-dimensional model that observes around the virtual object; the angle between the direction of the first viewing angle and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second viewing angle and the horizontal direction. The adjusting unit 1732 is further configured to rotate the angle at which the camera model observes the virtual object from the first viewing angle to the second viewing angle according to the movement operation.
- The collision detection method is vertical ray detection. The detection unit 1731 is further configured to perform vertical ray detection along the vertically upward direction in the virtual environment with the target point in the virtual object as the starting point, and to receive the first detection result returned after the vertical ray detection.
- The first detection result is used to represent the virtual object collided with in the vertically upward direction of the virtual object; the observation scene where the virtual object is located is determined according to the first detection result.
- the first detection result includes the object identifier of the first virtual item hit during the vertical ray detection; the detection unit 1731 is also used to determine that the observation scene where the virtual object is located is the indoor scene when the object identifier in the first detection result is a virtual house identifier; and to determine that the observation scene where the virtual object is located is the outdoor scene when the object identifier in the first detection result is an identifier other than the virtual house identifier.
- the first detection result includes the length of the ray when the vertical ray detection hits the first virtual item; the detection unit 1731 is also used to determine that the observation scene where the virtual object is located is the indoor scene when the length of the ray in the first detection result is less than or equal to a preset length; and to determine that the observation scene where the virtual object is located is the outdoor scene when the length of the ray in the first detection result exceeds the preset length.
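The vertical ray detection can be sketched as follows, combining the two alternative criteria just described (the hit object's identifier, or the ray length against a preset length). The scene representation, tag string, and threshold are assumptions made for the example, not the embodiment's actual engine API.

```python
from dataclasses import dataclass
from typing import List, Optional

PRESET_LENGTH = 10.0        # assumed threshold separating "roofed" from "open sky"
HOUSE_ID = "virtual_house"  # assumed identifier tag for house geometry

@dataclass
class Ceiling:
    object_id: str  # identifier of the surface's parent object
    height: float   # height of the surface in world units

def vertical_ray_detect(start_height: float,
                        ceilings: List[Ceiling]) -> Optional[Ceiling]:
    """Return the first surface hit by a ray cast straight up, if any."""
    overhead = [c for c in ceilings if c.height > start_height]
    return min(overhead, key=lambda c: c.height, default=None)

def classify_scene(start_height: float, ceilings: List[Ceiling]) -> str:
    hit = vertical_ray_detect(start_height, ceilings)
    if hit is None:
        return "outdoor"  # the ray hit nothing overhead
    ray_length = hit.height - start_height
    if hit.object_id == HOUSE_ID or ray_length <= PRESET_LENGTH:
        return "indoor"   # roof geometry, or a close overhead obstruction
    return "outdoor"

# Example: a target point at height 1.7 under a roof at height 4.0.
print(classify_scene(1.7, [Ceiling(HOUSE_ID, 4.0)]))  # -> indoor
```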
- the collision detection method includes horizontal ray detection; the detection unit 1731 is also used to cast, starting from a target point in the virtual object, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, where the included angle between every two detection rays is greater than a preset included angle; to receive the second detection result returned by performing the horizontal ray detection through the at least three detection rays, where the second detection result indicates the virtual items hit by the detection rays in the horizontal direction; and to determine the observation scene where the virtual object is located according to the second detection result.
- the second detection result includes the ray lengths when the at least three detection rays hit the first virtual item; the detection unit 1731 is also used to determine that the virtual object is in the indoor scene when, among the at least three detection rays, the ray length when no fewer than half of the detection rays hit the first virtual item is within the preset length; and to determine that the virtual object is in the outdoor scene when, among the at least three detection rays, the ray length when more than half of the detection rays hit the first virtual item exceeds the preset length.
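The horizontal variant can be sketched the same way: cast at least three well-separated horizontal rays and vote on how many hit a nearby surface. The threshold value and the list-based ray-result representation are assumptions for illustration.

```python
from typing import List, Optional

PRESET_LENGTH = 5.0  # assumed wall-distance threshold for "indoor"

def classify_scene_horizontal(ray_lengths: List[Optional[float]]) -> str:
    """Classify from the hit distances of horizontal detection rays.

    ray_lengths[i] is the distance at which ray i hit the first virtual
    item, or None if it hit nothing. At least three rays are assumed,
    cast in directions whose pairwise angles exceed the preset angle
    (e.g. 120 degrees apart for exactly three rays).
    """
    near_hits = sum(1 for d in ray_lengths
                    if d is not None and d <= PRESET_LENGTH)
    # No fewer than half of the rays hitting a nearby surface => indoor;
    # otherwise more than half missed or hit far away => outdoor.
    return "indoor" if near_hits * 2 >= len(ray_lengths) else "outdoor"

# Example: two of three rays hit walls within 5 units -> indoor.
print(classify_scene_horizontal([2.0, 4.5, None]))  # -> indoor
```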
- the receiving module 1720 and the adjustment module 1730 in the above embodiments may be implemented by a processor, or by a processor in cooperation with a memory; the display module 1710 in the above embodiments may be implemented by a display screen, or by a processor in cooperation with a display screen.
- FIG. 19 shows a structural block diagram of a terminal 1900 provided by an exemplary embodiment of the present invention.
- the terminal 1900 may be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
- the terminal 1900 may also be called other names such as user equipment, portable terminal, laptop terminal, and desktop terminal.
- the terminal 1900 includes a processor 1901 and a memory 1902.
- the processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
- the processor 1901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- the processor 1901 may also include a main processor and a coprocessor.
- the main processor is a processor for processing data in the wake-up state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
- the processor 1901 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used to render and draw the content that needs to be displayed on the display screen.
- the processor 1901 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
- the memory 1902 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
- the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction, and the at least one instruction is executed by the processor 1901 to implement the method for observing a virtual environment provided by the method embodiments in the present application.
- the terminal 1900 may optionally include a peripheral device interface 1903 and at least one peripheral device.
- the processor 1901, the memory 1902, and the peripheral device interface 1903 may be connected by a bus or a signal line.
- Each peripheral device may be connected to the peripheral device interface 1903 through a bus, a signal line, or a circuit board.
- the peripheral device includes at least one of a radio frequency circuit 1904, a touch display screen 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
- the peripheral device interface 1903 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1901 and the memory 1902.
- In some embodiments, the processor 1901, the memory 1902, and the peripheral device interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral device interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 1904 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 1904 communicates with a communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 1904 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
- the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
- the radio frequency circuit 1904 can communicate with other terminals through at least one wireless communication protocol.
- the wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
- the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
- the display screen 1905 is used to display a UI (User Interface).
- the UI may include graphics, text, icons, video, and any combination thereof.
- the display screen 1905 also has the ability to collect touch signals on or above the surface of the display screen 1905.
- the touch signal can be input to the processor 1901 as a control signal for processing.
- the display screen 1905 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- In some embodiments, there may be one display screen 1905, provided on the front panel of the terminal 1900; in other embodiments, there may be at least two display screens 1905, respectively provided on different surfaces of the terminal 1900 or in a folded design; in still other embodiments, the display screen 1905 may be a flexible display screen disposed on a curved or folding surface of the terminal 1900. The display screen 1905 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
- the display screen 1905 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
- the camera assembly 1906 is used to collect images or videos.
- the camera assembly 1906 includes a front camera and a rear camera.
- the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
- In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blur function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions.
- the camera assembly 1906 may also include a flash.
- the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
- the audio circuit 1907 may include a microphone and a speaker.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1901 for processing, or input them to the radio frequency circuit 1904 to implement voice communication.
- the microphone can also be an array microphone or an omnidirectional acquisition microphone.
- the speaker is used to convert the electrical signal from the processor 1901 or the radio frequency circuit 1904 into sound waves.
- the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
- When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
- the audio circuit 1907 may also include a headphone jack.
- the positioning component 1908 is used to locate the current geographic location of the terminal 1900 to implement navigation or LBS (Location Based Service, location-based service).
- the positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
- the power supply 1909 is used to supply power to various components in the terminal 1900.
- the power source 1909 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
- the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
- the wired rechargeable battery is a battery charged through a wired line
- the wireless rechargeable battery is a battery charged through a wireless coil.
- the rechargeable battery can also be used to support fast charging technology.
- the terminal 1900 further includes one or more sensors 1910.
- the one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911, a gyro sensor 1912, a pressure sensor 1913, a fingerprint sensor 1914, an optical sensor 1915, and a proximity sensor 1916.
- the acceleration sensor 1911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1900.
- the acceleration sensor 1911 can be used to detect the components of gravity acceleration on three coordinate axes.
- the processor 1901 may control the touch display 1905 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1911.
- the acceleration sensor 1911 can also be used for game or user movement data collection.
- the gyro sensor 1912 can detect the body direction and rotation angle of the terminal 1900, and the gyro sensor 1912 can cooperate with the acceleration sensor 1911 to collect a 3D action of the user on the terminal 1900.
- the processor 1901 can realize the following functions based on the data collected by the gyro sensor 1912: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
- the pressure sensor 1913 may be disposed on the side frame of the terminal 1900 and/or the lower layer of the touch display 1905.
- the processor 1901 can perform left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1913.
- the processor 1901 controls an operability control on the UI according to the user's pressure operation on the touch display 1905.
- the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
- the fingerprint sensor 1914 is used to collect the user's fingerprint; the processor 1901 identifies the user according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 itself identifies the user according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
- the fingerprint sensor 1914 may be provided on the front, back, or side of the terminal 1900. When a physical button or manufacturer logo is provided on the terminal 1900, the fingerprint sensor 1914 may be integrated with the physical button or manufacturer logo.
- the optical sensor 1915 is used to collect the ambient light intensity.
- the processor 1901 may control the display brightness of the touch display 1905 according to the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display 1905 is turned up; when the ambient light intensity is low, the display brightness of the touch display 1905 is turned down.
- the processor 1901 can also dynamically adjust the shooting parameters of the camera assembly 1906 according to the ambient light intensity collected by the optical sensor 1915.
- the proximity sensor 1916, also called a distance sensor, is usually provided on the front panel of the terminal 1900.
- the proximity sensor 1916 is used to collect the distance between the user and the front of the terminal 1900.
- When the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually decreases, the processor 1901 controls the touch display 1905 to switch from the screen-on state to the screen-off state; when the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually increases, the processor 1901 controls the touch display 1905 to switch from the screen-off state to the screen-on state.
- A person skilled in the art may understand that the structure shown in FIG. 19 does not constitute a limitation on the terminal 1900, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
- An embodiment of the present application also provides a terminal for observing a virtual environment.
- the terminal includes a processor and a memory, and the memory stores computer-readable instructions; when the computer-readable instructions are executed by the processor, the processor performs the steps of the above method for observing a virtual environment.
- the steps of the method for observing the virtual environment may be the steps in the method for observing the virtual environment of the foregoing embodiments.
- An embodiment of the present application further provides a computer-readable storage medium that stores computer-readable instructions.
- When the computer-readable instructions are executed by a processor, the processor is caused to perform the steps of the above method for observing a virtual environment.
- the steps of the method for observing the virtual environment may be the steps in the method for observing the virtual environment of the foregoing embodiments.
- a person of ordinary skill in the art may understand that all or part of the steps in the methods of the above embodiments may be completed by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The computer-readable storage medium may be the computer-readable storage medium included in the memory of the foregoing embodiments, or a computer-readable storage medium that exists independently and is not installed in the terminal.
- At least one instruction, at least one program, a code set, or an instruction set is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for observing the virtual environment as described in any one of FIG. 4, FIG. 10, FIG. 14, and FIG. 16.
- the computer-readable storage medium may include: read-only memory (ROM), random access memory (RAM), solid state drive (SSD), an optical disc, or the like.
- the random access memory may include resistive random access memory (ReRAM) and dynamic random access memory (DRAM).
- the program may be stored in a computer-readable storage medium.
- the mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (20)
- A method for observing a virtual environment, performed by a terminal, the method comprising: displaying a first environment screen of an application, the first environment screen including a virtual object in a first scene, the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment; receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation manner for observing the virtual environment; adjusting the first observation manner to a second observation manner according to the movement operation, the first observation manner corresponding to the first scene and the second observation manner corresponding to the second scene; and displaying a second environment screen of the application, the second environment screen including the virtual object in the second scene, the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
- The method according to claim 1, wherein the first scene comprises an outdoor scene and the second scene comprises an indoor scene, and the adjusting the first observation manner to the second observation manner according to the movement operation comprises: detecting, by collision detection, the observation scene in which the virtual object is located in the virtual environment; and adjusting the first observation manner to the second observation manner when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
- The method according to claim 2, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment at a first distance from the virtual object, the second observation manner comprises a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model comprises a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance is greater than the second distance; and the adjusting the first observation manner to the second observation manner comprises: adjusting the distance between the camera model and the virtual object from the first distance to the second distance.
- The method according to claim 2, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment from a first perspective, the second observation manner comprises a manner in which the camera model observes the virtual environment from a second perspective, the camera model comprises a three-dimensional model that performs observation around the virtual object, and the angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second perspective and the horizontal direction; and the adjusting the first observation manner to the second observation manner according to the movement operation comprises: rotating, according to the movement operation, the angle at which the camera model observes the virtual object from the first perspective to the second perspective.
- The method according to any one of claims 2 to 4, wherein the collision detection comprises vertical ray detection, and the detecting, by collision detection, the observation scene in which the virtual object is located in the virtual environment comprises: performing the vertical ray detection vertically upward in the virtual environment, starting from a target point in the virtual object; receiving a first detection result returned after the vertical ray detection is performed, the first detection result indicating a virtual item hit in the vertically upward direction of the virtual object; and determining the observation scene in which the virtual object is located according to the first detection result.
- The method according to claim 5, wherein the first detection result comprises an object identifier of the first virtual item hit during the vertical ray detection, and the determining the observation scene in which the virtual object is located according to the first detection result comprises: when the object identifier in the first detection result is a virtual house identifier, determining that the observation scene in which the virtual object is located is the indoor scene; and when the object identifier in the first detection result is an identifier other than the virtual house identifier, determining that the observation scene in which the virtual object is located is the outdoor scene.
- The method according to claim 5, wherein the first detection result comprises the length of the ray when the first virtual item is hit during the vertical ray detection, and the determining the observation scene in which the virtual object is located according to the first detection result comprises: when the length of the ray in the first detection result is less than or equal to a preset length, determining that the observation scene in which the virtual object is located is the indoor scene; and when the length of the ray in the first detection result exceeds the preset length, determining that the observation scene in which the virtual object is located is the outdoor scene.
- The method according to any one of claims 2 to 4, wherein the collision detection comprises horizontal ray detection, and the detecting, by collision detection, the observation scene in which the virtual object is located in the virtual environment comprises: casting, starting from a target point in the virtual object, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two of the detection rays being greater than a preset included angle; receiving a second detection result returned by performing the horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and determining the observation scene in which the virtual object is located according to the second detection result.
- The method according to claim 8, wherein the second detection result comprises the ray lengths when the at least three detection rays hit the first virtual item, and the determining the observation scene in which the virtual object is located according to the second detection result comprises: if, among the at least three detection rays, the ray length when no fewer than half of the detection rays hit the first virtual item is within a preset length, determining that the virtual object is in the indoor scene; and if, among the at least three detection rays, the ray length when more than half of the detection rays hit the first virtual item exceeds the preset length, determining that the virtual object is in the outdoor scene.
- An apparatus for observing a virtual environment, the apparatus comprising: a display module, configured to display a first environment screen of an application, the first environment screen including a virtual object in a first scene, the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment; a receiving module, configured to receive a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, each observation scene corresponding to at least one observation manner for observing the virtual environment; and an adjustment module, configured to adjust the first observation manner to a second observation manner according to the movement operation, the first observation manner corresponding to the first scene and the second observation manner corresponding to the second scene; the display module being further configured to display a second environment screen of the application, the second environment screen including the virtual object in the second scene, the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
- The apparatus according to claim 10, wherein the first scene comprises an outdoor scene and the second scene comprises an indoor scene, and the adjustment module comprises: a detection unit, configured to detect, by collision detection, the observation scene in which the virtual object is located in the virtual environment; and an adjustment unit, configured to adjust the first observation manner to the second observation manner when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
- The apparatus according to claim 11, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment at a first distance from the virtual object, the second observation manner comprises a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model comprises a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance is greater than the second distance; and the adjustment unit is further configured to adjust the distance between the camera model and the virtual object from the first distance to the second distance.
- The apparatus according to claim 11, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment from a first perspective, the second observation manner comprises a manner in which the camera model observes the virtual environment from a second perspective, the camera model comprises a three-dimensional model that performs observation around the virtual object, and the angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second perspective and the horizontal direction; and the adjustment unit is further configured to rotate, according to the movement operation, the angle at which the camera model observes the virtual object from the first perspective to the second perspective.
- The apparatus according to any one of claims 11 to 13, wherein the collision detection comprises vertical ray detection; and the detection unit is further configured to perform the vertical ray detection vertically upward in the virtual environment, starting from a target point in the virtual object; to receive a first detection result returned after the vertical ray detection is performed, the first detection result indicating a virtual item hit in the vertically upward direction of the virtual object; and to determine the observation scene in which the virtual object is located according to the first detection result.
- The apparatus according to claim 14, wherein the first detection result comprises an object identifier of the first virtual item hit during the vertical ray detection; the detection unit is further configured to determine, when the object identifier in the first detection result is a virtual house identifier, that the observation scene in which the virtual object is located is the indoor scene; and the detection unit is further configured to determine, when the object identifier in the first detection result is an identifier other than the virtual house identifier, that the observation scene in which the virtual object is located is the outdoor scene.
- The apparatus according to claim 14, wherein the first detection result comprises the length of the ray when the first virtual item is hit during the vertical ray detection; the detection unit is further configured to determine, when the length of the ray in the first detection result is less than or equal to a preset length, that the observation scene in which the virtual object is located is the indoor scene; and the detection unit is further configured to determine, when the length of the ray in the first detection result exceeds the preset length, that the observation scene in which the virtual object is located is the outdoor scene.
- The apparatus according to any one of claims 11 to 13, wherein the collision detection comprises horizontal ray detection; and the detection unit is further configured to cast, starting from a target point in the virtual object, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two of the detection rays being greater than a preset included angle; to receive a second detection result returned by performing the horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and to determine the observation scene in which the virtual object is located according to the second detection result.
- The apparatus according to claim 17, wherein the second detection result comprises the ray lengths when the at least three detection rays hit the first virtual item; the detection unit is further configured to determine, if among the at least three detection rays the ray length when no fewer than half of the detection rays hit the first virtual item is within a preset length, that the virtual object is in the indoor scene; and the detection unit is further configured to determine, if among the at least three detection rays the ray length when more than half of the detection rays hit the first virtual item exceeds the preset length, that the virtual object is in the outdoor scene.
- A terminal, comprising a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 9.
- A non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 9.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217006432A KR102693824B1 (ko) | 2018-12-05 | 2019-11-05 | 가상 환경 관찰 방법, 기기 및 저장 매체 |
SG11202103706SA SG11202103706SA (en) | 2018-12-05 | 2019-11-05 | Virtual environment viewing method, device and storage medium |
JP2021514085A JP7191210B2 (ja) | 2018-12-05 | 2019-11-05 | 仮想環境の観察方法、デバイス及び記憶媒体 |
US17/180,018 US11783549B2 (en) | 2018-12-05 | 2021-02-19 | Method for observing virtual environment, device, and storage medium |
US18/351,780 US20230360343A1 (en) | 2018-12-05 | 2023-07-13 | Method for observing virtual environment, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811478458.2A CN109634413B (zh) | 2018-12-05 | 2018-12-05 | 对虚拟环境进行观察的方法、设备及存储介质 |
CN201811478458.2 | 2018-12-05 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/180,018 Continuation US11783549B2 (en) | 2018-12-05 | 2021-02-19 | Method for observing virtual environment, device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020114176A1 true WO2020114176A1 (zh) | 2020-06-11 |
Family
ID=66071135
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/115623 WO2020114176A1 (zh) | 2018-12-05 | 2019-11-05 | 对虚拟环境进行观察的方法、设备及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (2) | US11783549B2 (zh) |
JP (1) | JP7191210B2 (zh) |
KR (1) | KR102693824B1 (zh) |
CN (1) | CN109634413B (zh) |
SG (1) | SG11202103706SA (zh) |
WO (1) | WO2020114176A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112843716A (zh) * | 2021-03-17 | 2021-05-28 | 网易(杭州)网络有限公司 | 虚拟物体提示与查看方法、装置、计算机设备及存储介质 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109634413B (zh) * | 2018-12-05 | 2021-06-11 | 腾讯科技(深圳)有限公司 | 对虚拟环境进行观察的方法、设备及存储介质 |
CN110585707B (zh) * | 2019-09-20 | 2020-12-11 | 腾讯科技(深圳)有限公司 | 视野画面显示方法、装置、设备及存储介质 |
US20220414991A1 (en) * | 2019-12-23 | 2022-12-29 | Sony Group Corporation | Video generation apparatus, method for generating video, and program of generating video |
CN111784844B (zh) * | 2020-06-09 | 2024-01-05 | 北京五一视界数字孪生科技股份有限公司 | 观察虚拟对象的方法、装置、存储介质及电子设备 |
CN114011074A (zh) * | 2021-09-23 | 2022-02-08 | 腾讯科技(深圳)有限公司 | 虚拟道具的控制方法和装置、存储介质及电子设备 |
CN117339205A (zh) * | 2022-06-29 | 2024-01-05 | 腾讯科技(成都)有限公司 | 画面显示方法、装置、设备、存储介质及程序产品 |
US11801448B1 (en) | 2022-07-01 | 2023-10-31 | Datadna, Inc. | Transposing virtual content between computing environments |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1595584A1 (en) * | 2004-05-11 | 2005-11-16 | Sega Corporation | Image processing program, game information processing program and game information processing apparatus |
CN105278676A (zh) * | 2014-06-09 | 2016-01-27 | 伊默森公司 | 基于视角和/或接近度修改触觉强度的可编程触觉设备和方法 |
CN107977141A (zh) * | 2017-11-24 | 2018-05-01 | 网易(杭州)网络有限公司 | 交互控制方法、装置、电子设备及存储介质 |
US20180167553A1 (en) * | 2016-12-13 | 2018-06-14 | Canon Kabushiki Kaisha | Method, system and apparatus for configuring a virtual camera |
CN109634413A (zh) * | 2018-12-05 | 2019-04-16 | 腾讯科技(深圳)有限公司 | 对虚拟环境进行观察的方法、设备及存储介质 |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
US6283857B1 (en) * | 1996-09-24 | 2001-09-04 | Nintendo Co., Ltd. | Three-dimensional image processing apparatus with enhanced automatic and user point of view control |
US10151599B1 (en) * | 2003-03-13 | 2018-12-11 | Pamala Meador | Interactive virtual reality tour |
GB2462095A (en) * | 2008-07-23 | 2010-01-27 | Snell & Wilcox Ltd | Processing of images to represent a transition in viewpoint |
JP6085411B2 (ja) | 2011-06-02 | 2017-02-22 | 任天堂株式会社 | 画像処理装置、画像処理方法、および画像処理装置の制御プログラム |
US20130303247A1 (en) * | 2012-05-08 | 2013-11-14 | Mediatek Inc. | Interaction display system and method thereof |
US8979652B1 (en) * | 2014-03-27 | 2015-03-17 | TECHLAND Sp. z o. o | Natural movement in a virtual environment |
JP2014184300A (ja) | 2014-04-28 | 2014-10-02 | Copcom Co Ltd | ゲームプログラム、及びゲーム装置 |
US10600245B1 (en) * | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
JP2016171989A (ja) * | 2015-03-16 | 2016-09-29 | 株式会社スクウェア・エニックス | プログラム、記録媒体、情報処理装置及び制御方法 |
KR20160128119A (ko) * | 2015-04-28 | 2016-11-07 | 엘지전자 주식회사 | 이동 단말기 및 이의 제어방법 |
WO2017180990A1 (en) * | 2016-04-14 | 2017-10-19 | The Research Foundation For The State University Of New York | System and method for generating a progressive representation associated with surjectively mapped virtual and physical reality image data |
WO2018020735A1 (ja) * | 2016-07-28 | 2018-02-01 | 株式会社コロプラ | 情報処理方法及び当該情報処理方法をコンピュータに実行させるためのプログラム |
US10372970B2 (en) * | 2016-09-15 | 2019-08-06 | Qualcomm Incorporated | Automatic scene calibration method for video analytics |
CN106237616A (zh) * | 2016-10-12 | 2016-12-21 | 大连文森特软件科技有限公司 | 基于在线可视化编程的vr器械格斗类游戏制作体验系统 |
CN106600709A (zh) * | 2016-12-15 | 2017-04-26 | 苏州酷外文化传媒有限公司 | 基于装修信息模型的vr虚拟装修方法 |
JP6789830B2 (ja) | 2017-01-06 | 2020-11-25 | 任天堂株式会社 | 情報処理システム、情報処理プログラム、情報処理装置、情報処理方法 |
EP3542252B1 (en) * | 2017-08-10 | 2023-08-02 | Google LLC | Context-sensitive hand interaction |
US10726626B2 (en) * | 2017-11-22 | 2020-07-28 | Google Llc | Interaction between a viewer and an object in an augmented reality environment |
CN108665553B (zh) * | 2018-04-28 | 2023-03-17 | 腾讯科技(深圳)有限公司 | 一种实现虚拟场景转换的方法及设备 |
CN108717733B (zh) * | 2018-06-07 | 2019-07-02 | 腾讯科技(深圳)有限公司 | 虚拟环境的视角切换方法、设备及存储介质 |
US20200285784A1 (en) * | 2019-02-27 | 2020-09-10 | Simulation Engineering, LLC | Systems and methods for generating a simulated environment |
- 2018-12-05: CN CN201811478458.2A patent/CN109634413B/zh active Active
- 2019-11-05: SG SG11202103706SA patent/SG11202103706SA/en unknown
- 2019-11-05: WO PCT/CN2019/115623 patent/WO2020114176A1/zh active Application Filing
- 2019-11-05: KR KR1020217006432A patent/KR102693824B1/ko active IP Right Grant
- 2019-11-05: JP JP2021514085A patent/JP7191210B2/ja active Active
- 2021-02-19: US US17/180,018 patent/US11783549B2/en active Active
- 2023-07-13: US US18/351,780 patent/US20230360343A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1595584A1 (en) * | 2004-05-11 | 2005-11-16 | Sega Corporation | Image processing program, game information processing program and game information processing apparatus |
CN105278676A (zh) * | 2014-06-09 | 2016-01-27 | 伊默森公司 | 基于视角和/或接近度修改触觉强度的可编程触觉设备和方法 |
US20180167553A1 (en) * | 2016-12-13 | 2018-06-14 | Canon Kabushiki Kaisha | Method, system and apparatus for configuring a virtual camera |
CN107977141A (zh) * | 2017-11-24 | 2018-05-01 | 网易(杭州)网络有限公司 | 交互控制方法、装置、电子设备及存储介质 |
CN109634413A (zh) * | 2018-12-05 | 2019-04-16 | 腾讯科技(深圳)有限公司 | 对虚拟环境进行观察的方法、设备及存储介质 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112843716A (zh) * | 2021-03-17 | 2021-05-28 | 网易(杭州)网络有限公司 | 虚拟物体提示与查看方法、装置、计算机设备及存储介质 |
CN112843716B (zh) * | 2021-03-17 | 2024-06-11 | 网易(杭州)网络有限公司 | 虚拟物体提示与查看方法、装置、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US11783549B2 (en) | 2023-10-10 |
SG11202103706SA (en) | 2021-05-28 |
CN109634413A (zh) | 2019-04-16 |
US20230360343A1 (en) | 2023-11-09 |
KR20210036392A (ko) | 2021-04-02 |
JP7191210B2 (ja) | 2022-12-16 |
JP2021535806A (ja) | 2021-12-23 |
US20210201591A1 (en) | 2021-07-01 |
KR102693824B1 (ko) | 2024-08-08 |
CN109634413B (zh) | 2021-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11151773B2 (en) | Method and apparatus for adjusting viewing angle in virtual environment, and readable storage medium | |
US11703993B2 (en) | Method, apparatus and device for view switching of virtual environment, and storage medium | |
US11224810B2 (en) | Method and terminal for displaying distance information in virtual scene | |
US11471768B2 (en) | User interface display method and apparatus, device and computer-readable storage medium | |
WO2020114176A1 (zh) | 对虚拟环境进行观察的方法、设备及存储介质 | |
CN109529319B (zh) | 界面控件的显示方法、设备及存储介质 | |
US11628371B2 (en) | Method, apparatus, and storage medium for transferring virtual items | |
CN112494955B (zh) | 虚拟对象的技能释放方法、装置、终端及存储介质 | |
US11766613B2 (en) | Method and apparatus for observing virtual item in virtual environment and readable storage medium | |
CN111921197B (zh) | 对局回放画面的显示方法、装置、终端及存储介质 | |
WO2020151594A1 (zh) | 视角转动的方法、装置、设备及存储介质 | |
US11790607B2 (en) | Method and apparatus for displaying heat map, computer device, and readable storage medium | |
CN110496392B (zh) | 虚拟对象的控制方法、装置、终端及存储介质 | |
CN109806583B (zh) | 用户界面显示方法、装置、设备及系统 | |
CN111589141A (zh) | 虚拟环境画面的显示方法、装置、设备及介质 | |
US12061773B2 (en) | Method and apparatus for determining selected target, device, and storage medium | |
JP2024509064A (ja) | 位置マークの表示方法及び装置、機器並びにコンピュータプログラム | |
WO2022237076A1 (zh) | 虚拟对象的控制方法、装置、设备及计算机可读存储介质 | |
CN112057861B (zh) | 虚拟对象控制方法、装置、计算机设备及存储介质 | |
CN111754631A (zh) | 三维模型的生成方法、装置、设备及可读存储介质 | |
CN113318443A (zh) | 基于虚拟环境的侦察方法、装置、设备及介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19894253 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 20217006432 Country of ref document: KR Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 2021514085 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19894253 Country of ref document: EP Kind code of ref document: A1 |