WO2023065949A1 - Object control method and apparatus in virtual scene, terminal device, computer-readable storage medium, and computer program product - Google Patents

Object control method and apparatus in virtual scene, terminal device, computer-readable storage medium, and computer program product

Info

Publication number
WO2023065949A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
rotation
virtual
rotation operation
reference axis
Prior art date
Application number
PCT/CN2022/120460
Other languages
English (en)
French (fr)
Inventor
杜丹丹
王光欣
陈德魁
李建全
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2023571741A (publication JP2024521690A)
Publication of WO2023065949A1
Priority to US18/206,562 (publication US20230310989A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • A63F13/5258Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525Changing parameters of virtual cameras
    • A63F13/5255Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/837Shooting of targets
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/92Video game devices specially adapted to be hand-held while playing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1006Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals having additional degrees of freedom
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • This application is based on, and claims priority to, Chinese patent application No. 202111220651.8, filed on October 20, 2021 and titled "Object control method, apparatus and terminal device in a virtual scene", and Chinese patent application No. 202111672726.6, filed on December 31, 2021 and titled "Object control method, apparatus and terminal device in a virtual scene".
  • the present application relates to computer technology, and in particular to an object control method, device, terminal equipment, computer-readable storage medium and computer program product in a virtual scene.
  • When a user controls a virtual object to play a game, the user usually changes the posture of the virtual object by clicking virtual buttons displayed on the human-computer interaction interface. The buttons used to adjust the virtual scene or the direction of the lens of the virtual scene occupy the interface and block the game screen. When performing actions on the virtual object, the user needs to press with multiple fingers and spends a certain amount of time selecting the corresponding button among multiple virtual interaction buttons. The operation is therefore difficult for the user, which reduces the efficiency of controlling the virtual scene.
  • Embodiments of the present application provide an object control method, apparatus, terminal device, computer program product, and computer-readable storage medium in a virtual scene, which can improve the control efficiency of the virtual scene and save the computing resources required for displaying virtual buttons.
  • An embodiment of the present application provides an object control method in a virtual scene, the method comprising: displaying a virtual scene in a human-computer interaction interface, wherein the virtual scene includes a virtual object; in response to a first rotation operation, controlling the posture of the virtual object to tilt to the left or right of the virtual object, wherein a first rotation reference axis corresponding to the first rotation operation is perpendicular to the human-computer interaction interface; in response to a second rotation operation, controlling the lens of the virtual scene to rotate around a second rotation reference axis, wherein the second rotation reference axis is parallel to the width direction of the human-computer interaction interface; and in response to a third rotation operation, controlling the lens of the virtual scene to rotate around a third rotation reference axis, wherein the third rotation reference axis is parallel to the height direction of the human-computer interaction interface.
  • An embodiment of the present application provides an object control device in a virtual scene, the device comprising:
  • a display module configured to display a virtual scene in a human-computer interaction interface; wherein the virtual scene includes virtual objects;
  • the first control module is configured to control the posture of the virtual object to tilt to the left or right of the virtual object in response to a first rotation operation; wherein the first rotation reference axis corresponding to the first rotation operation is perpendicular to the human-computer interaction interface;
  • the second control module is configured to control the lens of the virtual scene to rotate around a second rotation reference axis in response to a second rotation operation; wherein, the second rotation reference axis is parallel to the width direction of the human-computer interaction interface;
  • the third control module is configured to control the lens of the virtual scene to rotate around a third rotation reference axis in response to a third rotation operation; wherein the third rotation reference axis is parallel to the height direction of the human-computer interaction interface.
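  • The device claim above maps the three rotation reference axes to three distinct behaviours. The following is a minimal illustrative sketch (not the patent's implementation; all names such as Axis, VirtualObject, Camera and handle_rotation are hypothetical) of how such a dispatch could look in code:

      from dataclasses import dataclass
      from enum import Enum


      class Axis(Enum):
          YAW = "yaw"      # first rotation reference axis, perpendicular to the interface
          ROLL = "roll"    # second rotation reference axis, parallel to the interface width
          PITCH = "pitch"  # third rotation reference axis, parallel to the interface height


      @dataclass
      class VirtualObject:
          tilt_deg: float = 0.0   # > 0: tilted to the object's right, < 0: to its left


      @dataclass
      class Camera:
          pitch_deg: float = 0.0  # rotation around the second rotation reference axis
          yaw_deg: float = 0.0    # rotation around the third rotation reference axis


      def handle_rotation(axis: Axis, angle_deg: float, obj: VirtualObject, cam: Camera) -> None:
          """Route a rotation operation to the corresponding control behaviour."""
          if axis is Axis.YAW:       # first rotation operation: tilt the object's posture
              obj.tilt_deg = angle_deg
          elif axis is Axis.ROLL:    # second rotation operation: rotate the lens up/down
              cam.pitch_deg += angle_deg
          elif axis is Axis.PITCH:   # third rotation operation: rotate the lens left/right
              cam.yaw_deg += angle_deg

  • Under this sketch, for example, handle_rotation(Axis.YAW, 15.0, obj, cam) would tilt the virtual object 15 degrees to its right without any virtual button being pressed.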
  • An embodiment of the present application provides an electronic device for object control in a virtual scene, the electronic device comprising: a memory configured to store executable instructions; and a processor configured to implement any object control method in a virtual scene provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • Through the rotation operation, the posture of the virtual object in the virtual scene displayed in the human-computer interaction interface, or the lens of the virtual scene, is controlled; the rotation operation replaces traditional button-based control. The user does not need to press with multiple fingers to control the posture of the virtual object or to rotate the lens of the virtual scene. Since fewer buttons need to be arranged in the human-computer interaction interface, the occlusion of the interface is reduced and the control efficiency of the virtual scene is improved.
  • FIG. 1A is a schematic diagram of an application mode of an object control method in a virtual scene provided by an embodiment of the present application
  • FIG. 1B is a schematic diagram of an application mode of an object control method in a virtual scene provided by an embodiment of the present application
  • FIG. 3A is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application
  • FIG. 3B is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 3C is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 4B is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 4C is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • Fig. 5 is an axial schematic diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 6A is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 6B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 7A is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 7B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 8A is an optional flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 8B is an optional flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 8C is an optional schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • FIG. 9B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 9C is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • FIG. 10A is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • Fig. 11A is a schematic diagram of the direction of a virtual object in a third-person perspective provided by an embodiment of the present application.
  • FIG. 11B is a schematic diagram of a virtual object direction in a third-person perspective provided by an embodiment of the present application.
  • The terms "first", "second", and "third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
  • a virtual scene is a virtual scene displayed (or provided) when an application program runs on an electronic device.
  • the virtual scene can be a simulated environment of the real world, a semi-simulation and semi-fictional virtual scene, or a purely fictitious virtual scene.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimensions of the virtual scene.
  • the virtual scene may include sky, land, ocean, etc.
  • the land may include environmental elements such as deserts and cities, and the user may control virtual objects to move in the virtual scene.
  • Third-person shooter (TPS) games: shooting games in which players can observe the characters they operate on the game screen. The difference from first-person shooter games is that, in first-person shooter games, only the protagonist's field of view is displayed on the screen, while in third-person shooter games the protagonist is visible on the game screen.
  • Rotation reference axes: the axes of the spatial Cartesian coordinate system corresponding to the terminal device, which are perpendicular to each other. One axis of the coordinate system is perpendicular to the plane used by the electronic device for human-computer interaction, and the plane formed by the other two axes is parallel to that plane.
  • Gyroscope: an angular motion detection device, used to detect information such as the angle and angular velocity of rotation around each rotation reference axis.
  • Lens: a tool for viewing the virtual scene, which displays a picture of the virtual scene on the display screen by capturing a part of the virtual scene.
  • The game screen is obtained by capturing a part of the virtual scene through the lens, and the user (for example: a player) can view pictures of different areas in the virtual scene by controlling the movement of the lens.
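  • As a purely illustrative aid (the patent does not prescribe any data structures; the names below are hypothetical), the concepts defined above could be represented in code roughly as follows:

      from dataclasses import dataclass


      @dataclass
      class GyroSample:
          """One reading from the angular motion detection device (e.g. a gyroscope)."""
          yaw_deg: float      # accumulated angle around the axis perpendicular to the interface
          roll_deg: float     # accumulated angle around the axis parallel to the interface width
          pitch_deg: float    # accumulated angle around the axis parallel to the interface height
          yaw_rate: float     # angular velocities around the same three axes, in deg/s
          roll_rate: float
          pitch_rate: float


      @dataclass
      class Lens:
          """The 'lens' (virtual camera) that captures a part of the virtual scene."""
          position: tuple[float, float, float] = (0.0, 0.0, 0.0)
          pitch_deg: float = 0.0   # up/down orientation of the lens
          yaw_deg: float = 0.0     # left/right orientation of the lens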
  • embodiments of the present application provide an object control method in a virtual scene, an object control device in a virtual scene, a terminal device, a computer-readable storage medium, and a computer program product.
  • an exemplary implementation scene of the object control method in the virtual scene provided by the embodiment of the present application is first described.
  • The virtual scene can be output entirely by the terminal device, or output cooperatively by the terminal device and the server.
  • FIG. 1A is a schematic diagram of an application mode of the object control method in a virtual scene provided by an embodiment of the present application, applicable to application modes that rely entirely on the graphics processing hardware computing power of the terminal device 400 to complete the calculation of the relevant data of the virtual scene 100, such as stand-alone/offline games, in which the output of the virtual scene is completed through various types of terminal devices 400 such as smart phones, tablet computers, and virtual reality/augmented reality devices.
  • The terminal device 400 calculates the display data required for display through the graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, through the graphics output hardware, video frames capable of forming a visual perception of the virtual scene; for example, a two-dimensional video frame is presented on the screen of a smart phone, or a video frame realizing a three-dimensional display effect is projected onto the lenses of augmented reality/virtual reality glasses. In addition, in order to enrich the perception effect, the terminal device 400 can also use different hardware to form one or more of auditory perception, tactile perception, motion perception, and taste perception.
  • the terminal device 400 runs a stand-alone game application, and outputs a virtual scene including action role-playing during the running of the game application.
  • The virtual scene can be an environment for game characters to interact in, for example plains, streets, or valleys where game characters carry out battles. Taking the display of the virtual scene from the third-person perspective as an example, a virtual object is displayed in the virtual scene; the virtual object is a game character controlled by a real user and moves in the virtual scene in response to the real user's operations on a controller (for example: gyroscope, touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when a real user clicks a virtual button on the touch screen, the virtual object executes the action associated with that virtual button.
  • the terminal device 400 may be various types of mobile terminals, such as a smart phone, a tablet computer, a handheld game terminal, an augmented reality device, a virtual reality device, and the like.
  • A virtual scene is displayed through the display screen of the mobile terminal, the virtual scene includes a virtual object, and a gyroscope is provided in the mobile terminal (the embodiment of the present application does not limit the angular motion detection device to a gyroscope; other angular motion detection devices that can implement the solutions of the embodiments of the present application may also be used), and the gyroscope is used to detect the rotation operation for the mobile terminal.
  • The three rotation reference axes corresponding to the mobile terminal correspond to different control modes.
  • When a rotation operation is received, the mobile terminal controls the virtual object or the lens of the virtual scene according to the rotation reference axis corresponding to the rotation operation.
  • In this way, the user can control the virtual object to adjust its posture, or control the lens of the virtual scene to adjust, without clicking buttons, which improves the control efficiency of the virtual scene.
  • The solution in which terminal devices and servers cooperate involves two game modes, namely a local game mode and a cloud game mode.
  • The local game mode means that the terminal device and the server cooperate to run the game processing logic: the operation instructions input by the user (for example: a player) in the terminal device are partly processed by the game logic running on the terminal device and partly processed by the game logic running on the server, and the game logic run by the server is often more complex and requires more computing power.
  • The cloud game mode means that the server runs the game logic processing entirely: the cloud server renders the game scene data into audio and video streams and transmits them to the terminal device through the network for display.
  • In this mode, the terminal device only needs basic streaming media playback capability and the ability to obtain the user's (for example: a player's) operation instructions and send them to the server.
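  • A simplified sketch of the cloud game mode described above, assuming a generic connection, input device, and video player (all hypothetical; the patent does not specify these interfaces): the terminal only captures operation instructions and plays back the streamed frames, while the server runs the complete game logic and rendering.

      def cloud_game_client_loop(connection, input_device, video_player):
          """Runs on the terminal device: forward inputs, display streamed frames."""
          while True:
              op = input_device.poll()            # e.g. gyroscope rotation data
              if op is not None:
                  connection.send(op)              # forward the operation to the cloud server
              frame = connection.receive_frame()   # audio/video rendered by the server
              video_player.present(frame)


      def cloud_game_server_step(game_state, op):
          """Runs on the cloud server: apply the operation, render, and stream back."""
          game_state.apply(op)                     # e.g. tilt the object or rotate the lens
          return game_state.render_to_stream()     # encoded audio/video chunk for the client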
  • FIG. 1B is a schematic diagram of the application mode of the object control method in a virtual scene provided by the embodiment of the present application, which is applied to the terminal device 400 and the server 200, and is suitable for the application mode in which the calculation of the virtual scene is completed by relying on the computing power of the server 200 and the virtual scene is output on the terminal device 400.
  • the terminal device 400 runs a client (such as a game application in the online version), and interacts with other users by connecting to a game server (that is, the server 200), and the terminal device 400 outputs a virtual scene of the game application.
  • The environment for character interaction can be, for example, plains, streets, or valleys where game characters fight against each other. Taking the display of the virtual scene from the third-person perspective as an example, a virtual object is displayed in the virtual scene; the virtual object is a game character controlled by a real user and moves in the virtual scene in response to the real user's operations on a controller (for example: gyroscope, touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when a real user clicks a virtual button on the touch screen, the virtual object executes the action associated with that virtual button.
  • For example, the terminal device 400 receives the first rotation operation and sends the corresponding signal to the server 200; the server 200 tilts the posture of the virtual object according to the signal and sends display data representing the posture of the virtual object to the terminal device 400, so that the terminal device 400 displays to the user the posture of the virtual object tilted leftward or rightward.
  • In some embodiments, the terminal device receives control signals sent by other electronic devices and controls the virtual object in the virtual scene according to the control signals.
  • The other electronic device can be a handle device with a built-in gyroscope (for example: a wired handle device, a wireless handle device, a wireless remote controller, etc.).
  • When the handle device receives a rotation operation, it generates a corresponding control signal according to the rotation operation and sends the control signal to the terminal device, and the terminal device controls the posture of the virtual object in the virtual scene to tilt to the left or right of the virtual object according to the control signal.
  • The other electronic device may also be a gamepad. When the gamepad receives a rotation operation, it generates a corresponding control signal according to the rotation operation and sends the control signal to the terminal device; according to the control signal, the terminal device controls the posture of the virtual object in the virtual scene to tilt to the left or right of the virtual object, or rotates the lens of the virtual scene.
  • the terminal device 400 can implement the object control method in the virtual scene provided by the embodiment of the present application by running a computer program.
  • The computer program can be a native program or a software module in the operating system; it can be a native application program (APP, Application), that is, a program that needs to be installed in the operating system to run, such as a game APP (that is, the above-mentioned client); it can also be a mini program, that is, a program that only needs to be downloaded into the browser environment to run; and it can also be a mini game program that can be embedded in any APP.
  • In general, the above-mentioned computer program can be an application program, module, or plug-in in any form.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on cloud computing business models. It can form a resource pool to be used on demand, which is flexible and convenient. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
  • the server 200 can be an independent physical server, or a server cluster or a distributed system composed of multiple physical servers, and can also provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, Cloud servers for basic cloud computing services such as cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal device 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, and a smart watch, etc., but is not limited thereto.
  • the terminal device 400 and the server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application; the terminal device 400 shown in FIG. 2 includes: at least one processor 410, a network interface 420, a user interface 430, a bus system 440, and a memory 450.
  • Various components in the terminal device 400 are coupled together through the bus system 440 .
  • the bus system 440 is used to realize connection and communication among these components.
  • the bus system 440 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as bus system 440 in FIG. 2 .
  • User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers, one or more visual displays.
  • the user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
  • Memory 450 may be removable, non-removable or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 450 optionally includes one or more storage devices located physically remote from processor 410 .
  • Memory 450 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
  • the non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory).
  • the memory 450 described in the embodiment of the present application is intended to include any suitable type of memory.
  • memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • The operating system 451 includes system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, and is configured to implement various basic services and process hardware-related tasks.
  • Network communication module 452, configured to reach other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.
  • Presentation module 453, configured to enable the presentation of information via one or more output devices 431 (for example: display screens, speakers, etc.) associated with the user interface 430 (for example: a user interface configured to operate peripherals and display content and information).
  • the input processing module 454 is configured to detect one or more user inputs or interactions from one or more of the input devices 432 and to translate the detected inputs or interactions.
  • the object control device in the virtual scene provided by the embodiment of the present application can be realized by software.
  • FIG. 2 shows the object control device 455 in the virtual scene stored in the memory 450, which can be software in the form of a program, a plug-in, or the like, and includes the following software modules: a display module 4551 and a tilt control module 4552. These modules are logical, so they can be combined arbitrarily or further split according to the functions realized. It should be pointed out that, for convenience of expression, all of the above modules are shown at once in FIG. 2, but this should not be taken to mean that the object control device 455 in the virtual scene excludes an implementation that includes only the display module 4551; the functions of each module will be described below.
  • Fig. 3A is an optional schematic flow chart of the object control method in the virtual scene provided by the embodiment of the present application.
  • Taking rotation operations around different rotation reference axes as examples, the process of controlling the posture of the virtual object displayed in the human-computer interaction interface is described, and the terminal device is taken as the execution subject by way of example.
  • the object control method in the virtual scene provided by the embodiment of the present application may be executed solely by the terminal device 400 in FIG. 1A , or may be executed cooperatively by the terminal device 400 and the server 200 in FIG. 1B .
  • For example, controlling the posture of the virtual object to tilt to the left or right of the virtual object in step 102 can be executed cooperatively by the terminal device 400 and the server 200: after the server 200 calculates the display data of the posture of the virtual object, it returns the display data to the terminal device 400 for display.
  • Similarly, the rotation of the lens of the virtual scene around the second rotation reference axis can be executed cooperatively by the terminal device 400 and the server 200: after the server 200 calculates the display data of the rotated lens of the virtual scene, it returns the display data to the terminal device 400 for display.
  • For example, controlling the posture of the virtual object to tilt to the left or right of the virtual object in step 102 can be performed by the terminal device 400 alone: when the gyroscope of the terminal device 400 senses the first rotation operation, the virtual object in the virtual scene is controlled to tilt leftward or rightward according to the first rotation operation, and the human-computer interaction interface of the terminal device 400 displays the corresponding posture change of the virtual object.
  • Controlling the posture of the virtual object to tilt to the left or right of the virtual object can also be performed by the terminal device 400 together with another electronic device: the electronic device senses the first rotation operation through a built-in gyroscope and sends a control signal corresponding to the first rotation operation to the terminal device 400, the terminal device 400 controls the virtual object to tilt leftward or rightward according to the control signal, and the human-computer interaction interface of the terminal device 400 displays the corresponding posture change of the virtual object.
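  • The control signal passed from a separate electronic device (for example, a handle with a built-in gyroscope) to the terminal device is not specified in detail; the following is a minimal sketch under the assumption of a simple JSON message (the field names and helper functions are hypothetical):

      import json
      from dataclasses import dataclass, asdict


      @dataclass
      class ControlSignal:
          axis: str          # "yaw", "roll" or "pitch"
          angle_deg: float   # signed rotation angle detected by the gyroscope


      def encode_signal(signal: ControlSignal) -> bytes:
          """On the handle device: serialize the signal before sending it to the terminal."""
          return json.dumps(asdict(signal)).encode("utf-8")


      def on_signal_received(payload: bytes, virtual_object) -> None:
          """On the terminal device: tilt the virtual object according to the signal."""
          data = json.loads(payload.decode("utf-8"))
          if data["axis"] == "yaw":
              # Positive angle (clockwise rotation) tilts right; negative tilts left.
              virtual_object.tilt_deg = data["angle_deg"]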
  • FIG. 3A is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 3A .
  • The method shown in FIG. 3A can be executed by computer programs in various forms run by the terminal device 400, and is not limited to the above-mentioned client; it can also be, for example, the above-mentioned operating system 451, a software module, or a script. Therefore, the client should not be regarded as a limitation on the embodiments of the present application.
  • In step 101, a virtual scene is displayed on a human-computer interaction interface.
  • the terminal device has graphics computing capability and graphics output capability, and may be a smart phone, tablet computer, virtual reality/augmented reality glasses, etc.
  • The virtual scene is displayed on the human-computer interaction interface of the terminal device; the virtual scene is an environment for game characters to interact in, such as plains, streets, or valleys where game characters fight against each other.
  • The virtual object can be a game character controlled by a user (or player); that is, the virtual object is controlled through the input processing module 454, which includes a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, a gyroscope, and the like.
  • In step 102, in response to the first rotation operation, the posture of the virtual object is controlled to tilt to the left or right of the virtual object.
  • The first rotation operation is a rotation operation performed on the electronic device around the first rotation reference axis, and the first rotation reference axis corresponding to the first rotation operation is perpendicular to the human-computer interaction interface of the electronic device. The electronic device and the terminal device executing the object control method in the virtual scene of the embodiment of the present application may be the same device, or they may be different devices.
  • FIG. 5 is an axial schematic diagram of the electronic device provided by the embodiment of the present application;
  • FIG. 5 exemplarily shows the case where the electronic device is a mobile terminal, and the display screen of the mobile terminal displays the human-computer interaction interface.
  • The first rotation reference axis (YAW axis) is perpendicular to the human-computer interaction interface and points outward (the direction pointed by the arrow of the Z0 axis in Figure 5); the second rotation reference axis (ROLL axis) is parallel to the width direction of the human-computer interaction interface (the direction pointed by the arrow of the Y0 axis in Figure 5); and the third rotation reference axis (PITCH axis) is parallel to the height direction of the human-computer interaction interface (the direction pointed by the arrow of the X0 axis in Figure 5).
  • In some embodiments, the positive direction of the first rotation reference axis is the reverse of the direction in which the display screen is viewed, that is, the direction pointed by the arrow of the Z0 axis in Figure 5; the second rotation reference axis is parallel to the length direction of the human-computer interaction interface, that is, the direction pointed by the arrow of the Y0 axis in Figure 5; and the third rotation reference axis is parallel to the width direction of the human-computer interaction interface, that is, the direction pointed by the arrow of the X0 axis in Figure 5.
  • the left direction or the right direction of the virtual object is determined with reference to the virtual object's own perception, which may be consistent with or opposite to the user's perceived left direction and right direction, as illustrated below.
  • The electronic device and the terminal device may be the same device, and the terminal device may be a mobile terminal (such as a smart phone, a tablet computer, a handheld game terminal, an augmented reality device, etc.) with a built-in gyroscope; the data sensed by the gyroscope is used to identify the first rotation operation, and the posture of the virtual object is then controlled in response to the first rotation operation.
  • Before the terminal device receives the first rotation operation, the virtual object is in an initial posture.
  • Here, as an example, the initial posture of the virtual object is an upright standing posture.
  • Refer to FIG. 9C, which is a schematic diagram of the virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application: L1 in FIG. 9C is a straight line parallel to the width direction of the human-computer interaction interface, the lens of the virtual scene faces the back of the virtual object, and the current posture of the virtual object 110 is an upright standing posture.
  • the upright standing posture in FIG. 9C is used as a reference for subsequent explanations of the embodiments of the present application.
  • FIG. 9A is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • In FIG. 9A, the first rotation operation is that the terminal device rotates clockwise around the YAW axis.
  • The position of the line L2 is the position of the line L1 before the first rotation operation is performed, and the angle Y1 formed by the line L1 and the line L2 is the angle rotated around the YAW axis during the first rotation operation.
  • According to the first rotation operation, the virtual object 110 is controlled to tilt to the right of the virtual object.
  • Compared with the upright standing posture in FIG. 9C, the posture of the virtual object 110 in FIG. 9A is a posture tilted to the right.
  • FIG. 9B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • In FIG. 9B, the first rotation operation is that the terminal device rotates counterclockwise around the YAW axis.
  • The position of the line L2 is the position of the line L1 before the first rotation operation is performed, and the angle Y2 formed by the line L1 and the line L2 is the angle rotated around the YAW axis during the first rotation operation.
  • According to the first rotation operation, the virtual object 110 is controlled to tilt to the left of the virtual object. Compared with the upright standing posture in FIG. 9C, the posture of the virtual object 110 in FIG. 9B is a posture tilted to the left.
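  • A minimal sketch of the mapping illustrated by FIG. 9A and FIG. 9B: a clockwise rotation around the YAW axis tilts the virtual object to its right and a counterclockwise rotation tilts it to its left. The clamping limit below is an assumption for illustration, not a value from the patent.

      MAX_TILT_DEG = 45.0  # assumed limit on how far the posture may lean


      def tilt_from_yaw(yaw_angle_deg: float) -> float:
          """Map the signed YAW rotation angle (clockwise > 0) to a posture tilt angle."""
          return max(-MAX_TILT_DEG, min(MAX_TILT_DEG, yaw_angle_deg))


      # A clockwise rotation of 20 degrees tilts the object 20 degrees to its right,
      # while a counterclockwise rotation of 30 degrees tilts it 30 degrees to its left.
      assert tilt_from_yaw(20.0) == 20.0
      assert tilt_from_yaw(-30.0) == -30.0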
  • In some embodiments, the electronic device and the terminal device are different devices, and the electronic device may be a handle device with a built-in gyroscope (for example: a wired handle device, a wireless handle device, a wireless remote controller, etc.). In response to the first rotation operation for the handle device, the handle device generates a corresponding angular motion signal based on the first rotation operation and sends the angular motion signal to the terminal device, and the terminal device controls the posture of the virtual object to tilt according to the angular motion signal.
  • The electronic device may also be a wearable device with a built-in gyroscope (such as a headset, a helmet, a smart bracelet, etc.). In response to the first rotation operation for the wearable device, the wearable device generates a corresponding angular motion signal and sends it to the terminal device, and the terminal device controls the posture of the virtual object to tilt according to the angular motion signal.
  • In this way, the rotation operation is used to control the posture of the virtual object to tilt along the direction corresponding to the operation, which improves the efficiency of manipulating the virtual object in the virtual scene.
  • The user can control the virtual object to perform multiple combined actions (for example: shooting while tilting the upper body) with fewer pressing operations, which reduces the difficulty of manipulation, saves the space on the human-computer interaction interface for arranging virtual buttons, saves the computing resources required for displaying the virtual buttons on the human-computer interaction interface, and reduces the occlusion of the human-computer interaction interface.
  • In step 103, in response to the second rotation operation, the lens of the virtual scene is controlled to rotate around the second rotation reference axis.
  • The second rotation reference axis is parallel to the width direction of the human-computer interaction interface.
  • The lens of the virtual scene is located in the space of the virtual scene, and the picture of the virtual scene displayed on the human-computer interaction interface of the terminal device is obtained by the lens of the virtual scene capturing the content of the virtual scene.
  • The second rotation operation is a rotation operation performed on the electronic device around the second rotation reference axis (ROLL axis). According to the second rotation operation, the lens of the virtual scene rotates in the same direction around the second rotation reference axis, and the rotation angle of the lens is positively correlated with the rotation angle of the second rotation operation around the second rotation reference axis.
  • For example, the rotation angle of the lens of the virtual scene and the rotation angle of the second rotation operation around the second rotation reference axis are constrained by a proportional function, or by a monotonically increasing curve function.
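  • Two example mappings that satisfy the positive correlation described above, one constrained by a proportional function and one by a monotonically increasing curve (the coefficients are illustrative assumptions only):

      import math


      def proportional_mapping(op_angle_deg: float, k: float = 1.5) -> float:
          """Lens rotation angle directly proportional to the operation's rotation angle."""
          return k * op_angle_deg


      def curve_mapping(op_angle_deg: float, max_lens_deg: float = 80.0, scale: float = 30.0) -> float:
          """A smooth, monotonically increasing mapping that saturates at max_lens_deg."""
          return max_lens_deg * math.tanh(op_angle_deg / scale)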
  • the second rotation operation is a rotation operation performed on the electronic device around the second rotation reference axis.
  • The object on which the above-mentioned second rotation operation is performed is an electronic device. The electronic device and the terminal device performing the steps in FIG. 3A may be the same device (for example: a smart phone, a tablet computer, a handheld game terminal, an augmented reality device, etc.), or they may be different devices, which will be described below in conjunction with different scenarios.
  • When the electronic device and the terminal device are the same device, the terminal device controls the lens of the virtual scene in response to the second rotation operation that rotates the terminal device.
  • FIG. 9C is taken as a schematic diagram of displaying a virtual scene in the human-computer interaction interface before the terminal device receives the second rotation operation.
  • When the second rotation operation is that the terminal device rotates counterclockwise around the second rotation reference axis, the lens of the virtual scene rotates counterclockwise around the second rotation reference axis; the rotation directions are consistent and the rotation angles are positively correlated.
  • In the space of the virtual scene this corresponds to a downward rotation of the lens, so the picture of the virtual scene displayed in the human-computer interaction interface moves from the lower boundary of the human-computer interaction interface toward the upper boundary to display a new picture, and the picture stops moving when the second rotation operation ends.
  • Here, positive correlation means that the rotation angle of the lens of the virtual scene is in direct proportion to the rotation angle of the second rotation operation, or that the two have the same change trend; for example, when the rotation angle of the second rotation operation increases, the rotation angle of the lens of the virtual scene increases.
  • Referring to FIG. 6A, the terminal device rotates counterclockwise around the second rotation reference axis (the ROLL axis in FIG. 6A); the position of the straight line L3 is where the boundary line L5 on one side of the human-computer interaction interface was located before the second rotation operation was performed, and the rotation angle Y3 corresponding to the second rotation operation is the angle between the boundary line L5 and the straight line L3.
  • The angle by which the lens of the virtual scene, following the second rotation operation, rotates downward in the space of the virtual scene is positively correlated with the rotation angle Y3.
  • the virtual object 110, a part of the virtual building 120, a part of the door 121 of the virtual building, and the virtual scene ground 130 are displayed in the human-computer interaction interface.
  • Compared with FIG. 9C, in the screen displayed on the human-computer interaction interface of the terminal device in FIG. 6A, the upper boundary of the door 121 of the virtual building is no longer visible, and the ground 130 of the virtual scene newly appears.
  • Take FIG. 9C as a schematic diagram of the virtual scene displayed on the human-computer interaction interface before the second rotation operation is received.
  • When the second rotation operation is that the terminal device rotates clockwise around the second rotation reference axis, the lens of the virtual scene rotates clockwise around the second rotation reference axis; the rotation directions are consistent and the rotation angles are positively correlated.
  • In the space of the virtual scene this corresponds to an upward rotation of the lens, so the picture of the virtual scene displayed in the human-computer interaction interface moves from the upper boundary of the human-computer interaction interface toward the lower boundary to display a new picture, and the picture stops moving when the second rotation operation ends.
  • FIG. 6B is a schematic diagram of the human-computer interaction interface in the virtual scene provided by the embodiment of the present application. The terminal device rotates clockwise around the second rotation reference axis (the ROLL axis in FIG. 6B); the position of the straight line L3 is where the boundary line L5 on one side of the human-computer interaction interface was located before the second rotation operation was performed, and the rotation angle Y4 corresponding to the second rotation operation is the angle between the boundary line L5 and the straight line L3.
  • The angle by which the lens of the virtual scene, following the second rotation operation, rotates upward in the space of the virtual scene is positively correlated with the rotation angle Y4.
  • the virtual object 110, the first floor and the second floor of the virtual building 120, and a part of the door 121 of the virtual building are displayed in the human-computer interaction interface.
  • Compared with FIG. 9C, in the screen displayed on the human-computer interaction interface of the terminal device in FIG. 6B, the lower boundary of the door 121 of the virtual building is no longer visible, and the window 122 on the second floor of the virtual building newly appears.
  • In some embodiments, the electronic device and the terminal device are different devices, and the electronic device may be a handle device with a built-in gyroscope (for example: a wired handle device, a wireless handle device, a wireless remote controller, etc.); that is, for the second rotation operation that rotates the handle device, the handle device generates a corresponding angular motion signal and sends it to the terminal device, and the terminal device controls the lens of the virtual scene to rotate according to the angular motion signal.
  • The electronic device may also be a wearable device with a built-in gyroscope (such as a headset, a helmet, a smart bracelet, etc.); that is, for the second rotation operation that rotates the wearable device, the wearable device generates a corresponding angular motion signal and sends it to the terminal device, and the terminal device controls the lens of the virtual scene to rotate according to the angular motion signal.
  • In this way, the rotation operation controls the lens of the virtual scene to rotate along the direction corresponding to the operation, which improves the control efficiency of the lens of the virtual scene.
  • Controlling the rotation of the lens through the rotation operation makes it convenient to show the user pictures of different fields of view in the virtual scene.
  • It also reduces the difficulty of manipulation, saves the space on the human-computer interaction interface for arranging virtual buttons, saves the computing resources required for displaying the virtual buttons on the human-computer interaction interface, and reduces the occlusion of the human-computer interaction interface by the virtual buttons.
  • In step 104, in response to the third rotation operation on the electronic device, the lens of the virtual scene is controlled to rotate around the third rotation reference axis.
  • When the electronic device is the terminal device, the third rotation reference axis is parallel to the height direction of the human-computer interaction interface of the terminal device.
  • The third rotation operation is a rotation operation performed on the terminal device around the third rotation reference axis (PITCH axis). According to the third rotation operation, the lens of the virtual scene rotates in the same direction around the third rotation reference axis, and the rotation angle of the lens is positively correlated with the rotation angle of the third rotation operation around the third rotation reference axis.
  • For example, the rotation angle of the lens of the virtual scene and the rotation angle of the third rotation operation around the third rotation reference axis are constrained by a proportional function, or by a monotonically increasing curve function.
  • the third rotation operation is a rotation operation performed on the electronic device around the third rotation reference axis.
  • The object on which the above-mentioned third rotation operation is performed is an electronic device. The electronic device and the terminal device performing the steps in FIG. 3A may be the same device (for example: a smart phone, a tablet computer, a handheld game terminal, an augmented reality device, etc.), or they may be different devices, which will be described below in conjunction with different scenarios.
  • When the electronic device and the terminal device are the same device, the terminal device controls the lens of the virtual scene in response to the third rotation operation that rotates the terminal device.
  • FIG. 9C is taken as a schematic diagram of displaying a virtual scene in the human-computer interaction interface before the terminal device receives the third rotation operation.
  • When the third rotation operation is that the terminal device rotates counterclockwise around the third rotation reference axis, the lens of the virtual scene rotates counterclockwise around the third rotation reference axis; the rotation directions are consistent and the rotation angles are positively correlated.
  • The picture of the virtual scene displayed in the human-computer interaction interface moves from the left boundary of the human-computer interaction interface toward the right boundary to display a new picture, and the picture stops moving when the third rotation operation ends.
  • the directions of the right boundary and the left boundary of the human-computer interaction interface are determined by the left and right directions perceived by the user facing the human-computer interaction interface.
  • FIG. 7A is a schematic diagram of the virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application. The electronic device rotates counterclockwise around the third rotation reference axis (the PITCH axis in FIG. 7A); the position of the straight line L4 is where the boundary line L6 on one side of the human-computer interaction interface was located before the third rotation operation was performed, and the rotation angle Y5 corresponding to the third rotation operation is the angle between the boundary line L6 and the straight line L4.
  • Following the third rotation operation, the lens of the virtual scene rotates leftward, as perceived by the user facing the human-computer interaction interface, by an angle positively correlated with the rotation angle Y5.
  • the virtual object 110 and a part of the virtual building 120 are displayed in the human-computer interaction interface.
  • the left boundary of the virtual building 120 newly appears in the screen displayed on the human-computer interaction interface in FIG. 7A , and the left side is the left side perceived by the user facing the human-computer interaction interface.
  • the third rotation operation is that the terminal device rotates clockwise around the third rotation reference axis, then the lens of the virtual scene rotates clockwise around the third rotation reference axis.
  • the picture of the virtual scene displayed in the human-computer interaction interface moves from the right boundary toward the left boundary of the human-computer interaction interface to display a new picture, and the picture stops moving when the third rotation operation is completed.
  • FIG. 7B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application; the electronic device rotates clockwise around the third rotation reference axis (the PITCH axis in FIG. 7B).
  • the position of the straight line L4 is the position of the boundary line L6 on one side of the human-computer interaction interface before the third rotation operation is performed, and the rotation angle Y6 corresponding to the third rotation operation is the angle between the boundary line L6 and the straight line L4.
  • the camera of the virtual scene follows the third rotation operation, and the angle of the rightward rotation perceived by the user facing the human-computer interaction interface in the virtual scene is positively correlated with the rotation angle Y6.
  • the virtual object 110 and a part of the virtual building 120 are displayed on the human-computer interaction interface.
  • the right side boundary of the virtual building 120 newly appears in the screen displayed on the human-computer interaction interface in FIG. 7B , and the right side is the right side perceived by the user facing the human-computer interaction interface.
  • the electronic device and the terminal device are different devices, and the electronic device may be a handle device with a built-in gyroscope (for example: a wired handle device, a wireless handle device, a wireless remote controller, etc.); that is, in response to the third rotation operation that rotates the handle device, the handle device generates a corresponding angular motion signal and sends it to the terminal device, and the terminal device controls the lens of the virtual scene to rotate according to the angular motion signal.
  • the electronic device can also be a wearable device with a built-in gyroscope (such as earphones, a helmet, a smart bracelet, etc.); that is, in response to the third rotation operation that rotates the wearable device, the wearable device generates a corresponding angular motion signal and sends it to the terminal device, and the terminal device controls the lens of the virtual scene to rotate according to the angular motion signal.
  • step 102 , step 103 or step 104 may be performed after step 101 .
  • there is no execution sequence restriction among step 102, step 103 and step 104; the corresponding step can be executed whenever the rotation operation corresponding to that step is received.
  • the first rotation operation, the second rotation operation, and the third rotation operation revolve around different rotation reference axes, and the three operations do not interfere with each other, and the three operations can be performed simultaneously or only one or two of them can be performed.
  • the first rotation operation corresponds to controlling the posture of the virtual object
  • the second rotation operation corresponds to camera rotation around the second rotation reference axis
  • the third rotation operation corresponds to lens rotation around the third rotation reference axis. Since each operation corresponds to a different rotation reference axis, the lens rotation directions never oppose each other and there is no conflict between posture adjustment and lens adjustment, so the controls corresponding to the three operations can be performed simultaneously.
  • FIG. 3B is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application; each step in FIG. 3B is the same as that in FIG. 3A, for example, in FIG. 3B , after step 101, step 102, step 103, and step 104 are executed in sequence.
  • FIG. 3C is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application; after step S101 , further includes: step 105 , confirming the type of rotation operation for the electronic device.
  • the types of rotation operations include: a first rotation operation, a second rotation operation and a third rotation operation.
  • Step 105 confirms the rotation operation type, and the confirmed result may be: any two of the three rotation operations are being executed; any one of the three rotation operations is being executed; and the three rotation operations are being executed simultaneously. After confirming which rotation operations currently exist, perform the steps corresponding to each rotation operation.
  • By executing step 105, the type of rotation operation currently being performed can be effectively confirmed, and processing time can be reserved for the electronic device. For example: step 105 confirms that the rotation operations currently performed on the electronic device are the first rotation operation and the third rotation operation. Referring to FIG. 3C, step 102 and step 104 are executed after step 105; since the second rotation operation is not performed, step 103 is not executed. Through the combination of the first rotation operation and the third rotation operation, while the lens rotates around the third rotation reference axis, the posture of the virtual object can be controlled to tilt left or right.
  • the third rotation operation corresponds to counterclockwise rotation around the third rotation reference axis, then the picture displayed as a virtual scene on the human-computer interaction interface moves to the left of the virtual object, and the posture of the virtual object tilts to the left.
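  • a minimal dispatch sketch for step 105, assuming per-frame gyroscope deltas are already resolved onto the three reference axes; the handler names and the jitter threshold are hypothetical:

```python
ANGLE_EPS = 1.0  # degrees; ignore readings below this to filter sensor jitter (assumed)

def dispatch_rotation(delta_yaw: float, delta_roll: float, delta_pitch: float,
                      on_first, on_second, on_third) -> None:
    # Because each operation uses a different rotation reference axis, any subset
    # of the three handlers may fire in the same frame without conflict.
    if abs(delta_yaw) > ANGLE_EPS:
        on_first(delta_yaw)    # step 102: tilt the virtual object's posture
    if abs(delta_roll) > ANGLE_EPS:
        on_second(delta_roll)  # step 103: rotate the lens about the second axis
    if abs(delta_pitch) > ANGLE_EPS:
        on_third(delta_pitch)  # step 104: rotate the lens about the third axis
```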
  • step 102 can be implemented in the following manner: in the direction consistent with the first rotation operation around the first rotation reference axis, control at least a part of the virtual object including the head to tilt to the left or right of the virtual object; as an example, the tilt angles of the parts of the virtual object from the head downward decrease successively, and are all positively correlated with the rotation angle of the first rotation operation around the first rotation reference axis.
  • the motion model of the virtual object includes the head, neck, limbs and torso; at least part including the head may be the head, neck, upper limbs, waist and torso above the waist of the virtual object. Alternatively, at least part including the head may be the head, neck, upper limbs, shoulders, and chest of the virtual object.
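  • a sketch of how the decreasing tilt angles could be applied down the motion model; the skeleton API `set_lean_angle` and the per-part fractions are assumptions of the example:

```python
# Parts ordered from the head downward; the tilt fraction decreases down the chain.
TILT_CHAIN = [("head", 1.0), ("neck", 0.8), ("upper_limbs", 0.6),
              ("torso_above_waist", 0.4), ("waist", 0.2)]

def apply_posture_tilt(skeleton, rotation_angle_deg: float, gain: float = 0.5) -> None:
    # Each part's tilt stays positively correlated with the first rotation
    # operation's angle; the sign of rotation_angle_deg selects left or right.
    for part, fraction in TILT_CHAIN:
        skeleton.set_lean_angle(part, gain * fraction * rotation_angle_deg)
```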
  • the posture of the virtual object before tilting is taken as the first posture
  • the posture after tilting is taken as the second posture.
  • the first posture may be a posture in which the center of gravity of the head and the center of gravity of the torso are on the same straight line, for example: a standing posture or a squatting posture; the second posture is a posture in which the center of gravity of the head and the center of gravity of the torso are not on the same straight line.
  • the second posture is, for example, a left probe posture or a right probe posture.
  • Controlling the posture of the virtual object to tilt can be characterized as: switching the posture of the virtual object from the first posture to the second posture, and after the posture of the virtual object is tilted, the second posture is used as the new first posture.
  • FIG. 4A is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • in step 102, in response to the first rotation operation on the electronic device, controlling the posture of the virtual object to tilt to the left or right of the virtual object can be realized through step 1021 and step 1022 in FIG. 4A.
  • in step 1021, when the angle by which the first rotation operation rotates toward the left of the virtual object around the first rotation reference axis is greater than the angle threshold, at least a part of the virtual object including the head is controlled to tilt to the left of the virtual object.
  • in step 1022, when the angle by which the first rotation operation rotates toward the right of the virtual object around the first rotation reference axis is greater than the angle threshold, at least a part of the virtual object including the head is controlled to tilt to the right of the virtual object.
  • the premise of controlling at least a part of the virtual object including the head to tilt to the left or right of the virtual object is that the angle by which the first rotation operation rotates toward the left or right of the virtual object is greater than the angle threshold.
  • the angle threshold may be a value learned according to the rotation operation recording training, so as to better judge whether the user's rotation operation satisfies the premise of performing a leftward or rightward rotation gesture.
  • the angle threshold can be acquired in the following manner: acquire historical record data of first rotation operations on the electronic device, wherein the historical record data includes the rotation angles of first rotation operations within the latest preset duration (for example: 7 days); count the frequency of occurrence of the different rotation angles and use the rotation angle with the highest frequency as the angle threshold. Alternatively, use the median of the recorded rotation angles as the angle threshold.
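  • a sketch of deriving the angle threshold from such history records; binning angles to whole degrees before taking the mode is an assumption of the example:

```python
from collections import Counter
from statistics import median

def angle_threshold_from_history(rotation_angles: list,
                                 use_median: bool = False,
                                 bin_size: float = 1.0) -> float:
    # rotation_angles: angles of first rotation operations recorded within the
    # latest preset duration (e.g. 7 days).
    if use_median:
        return median(rotation_angles)          # median variant
    bins = Counter(round(a / bin_size) * bin_size for a in rotation_angles)
    return bins.most_common(1)[0][0]            # most frequent rotation angle
```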
  • FIG. 4B is a schematic flowchart of an object control method in a virtual scene provided by an embodiment of the present application.
  • in step 102, in response to the first rotation operation on the electronic device, controlling the posture of the virtual object to tilt to the left or right of the virtual object can be realized through step 1023 and step 1024 in FIG. 4B.
  • in step 1023, when the first rotation operation rotates toward the left of the virtual object around the first rotation reference axis, the angle is greater than the angle threshold, and the angular velocity is greater than the angular velocity threshold, at least a part of the virtual object including the head is controlled to tilt to the left of the virtual object.
  • in step 1024, when the first rotation operation rotates toward the right of the virtual object around the first rotation reference axis, the angle is greater than the angle threshold, and the angular velocity is greater than the angular velocity threshold, at least a part of the virtual object including the head is controlled to tilt to the right of the virtual object.
  • here, the premise of controlling at least a part of the virtual object including the head to tilt to the left or right of the virtual object is that the angle by which the first rotation operation rotates toward the left or right of the virtual object is greater than the angle threshold and the angular velocity is greater than the angular velocity threshold.
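  • a sketch of the dual-threshold check behind steps 1023 and 1024; the sign convention (negative values meaning a rotation toward the virtual object's left) is an assumption of the example:

```python
def tilt_direction(angle: float, angular_velocity: float,
                   angle_threshold: float, angular_velocity_threshold: float):
    # Tilt only when both the rotation angle and the angular velocity of the
    # first rotation operation exceed their thresholds; otherwise keep the posture.
    if abs(angle) > angle_threshold and abs(angular_velocity) > angular_velocity_threshold:
        return "left" if angle < 0 else "right"
    return None
```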
  • the angle threshold or the angular velocity threshold may be a preset fixed value, or may be a value determined according to the user's historical operation data. For example, to obtain historical operation data for a virtual object, since the user's behavior habits change occasionally, the operation records within the most recent set time, or of the most recent set number of rotation operations, can be obtained as the historical operation data.
  • the historical operation data may include: the rotation direction corresponding to the rotation operation, the rotation angular velocity, and the angle at the start of the operation; based on the historical operation data, a threshold recognition model is invoked to obtain the angle threshold and angular velocity threshold that can be used to identify abnormal operations on the virtual object; wherein, the threshold recognition model is obtained by training on rotation operation data samples and the response or non-response labels of those samples.
  • Abnormal operations include but are not limited to: the angular velocity of the rotation operation exceeds the angular velocity that the user can achieve, the starting angle difference of the rotation operation is greater than the angle difference corresponding to the user's normal operation, etc.
  • the rotation operation data sample may be a collection of rotation operation data during normal operations of a real user corresponding to the virtual object.
  • when the rotation angle corresponding to a rotation operation is greater than the angle threshold, or the rotation angle is greater than the angle threshold and the rotation angular velocity is greater than the angular velocity threshold, and the rotation operation satisfies the condition for controlling the posture of the virtual object to tilt, the rotation operation is labeled as response; otherwise it is labeled as non-response.
  • the threshold recognition model is a machine learning model.
  • the machine learning model can be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully connected neural network, etc.), a decision tree model, a gradient boosting tree, a multilayer perceptron, and a support vector machine.
  • the embodiment does not specifically limit the type of the machine learning model.
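  • one possible realization of such a threshold recognition model, sketched with a shallow decision tree from scikit-learn; the feature layout and the choice of scikit-learn are assumptions, since the document only requires a trainable machine learning model:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_threshold_model(samples: np.ndarray, labels: np.ndarray) -> DecisionTreeClassifier:
    # samples: one row per recorded rotation operation, columns assumed to be
    # [rotation_direction (-1/+1), rotation_angle, angular_velocity, start_angle]
    # labels: 1 = respond (tilt), 0 = do not respond
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(samples, labels)
    return model

def is_abnormal_operation(model: DecisionTreeClassifier, operation: np.ndarray) -> bool:
    # An operation the model would not respond to (e.g. an angular velocity the
    # user could not physically produce) is treated as abnormal.
    return model.predict(operation.reshape(1, -1))[0] == 0
```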
  • before performing step 102, it may also be confirmed whether the current posture of the virtual object can be tilted in the corresponding direction.
  • the first condition includes: the body part required for the virtual object to tilt based on the current posture is not in a working state.
  • the body parts required for tilting include: the torso above the waist and the head, neck, and upper limbs of the virtual object, or include: the head, neck, chest, shoulders, and upper limbs of the virtual object.
  • the first rotation operation is a leftward rotation of the virtual object around the first rotation reference axis of the electronic device.
  • the current posture is the left probe posture
  • all the body parts required for the left probe are already in a working state, so the first condition is not met; the left probe cannot be performed again, and the left probe posture is maintained
  • when the virtual object's current posture is the right probe posture, the body parts required for tilting the posture to the left are not in a working state, and the first condition is met, so the posture is tilted to the left of the virtual object
  • when the current posture is the driving posture, the upper limbs of the virtual object are being used for driving and are therefore in a working state; the current posture does not meet the first condition, and the current posture is maintained.
  • more generally, when a body part required for tilting is being used to maintain the current posture, it is in a working state; the current posture does not meet the first condition, and the current posture is maintained.
  • when the virtual object is in a squatting posture, a standing posture, or a sitting posture (for example: the virtual object sits in a non-driving position of a virtual vehicle), the body parts required for tilting are not needed to maintain the current posture, so the first condition can be met.
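  • a minimal sketch of the first-condition check; the part names and the `busy_parts` bookkeeping are assumptions of the example:

```python
TILT_BODY_PARTS = {"head", "neck", "upper_limbs", "waist", "torso_above_waist"}

def meets_first_condition(busy_parts: set) -> bool:
    # busy_parts: body parts currently in a working state (e.g. the upper limbs
    # while driving, or the parts already used by a left/right probe posture).
    return not (TILT_BODY_PARTS & busy_parts)
```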
  • before step 102 is performed, it may also be confirmed whether tilting the posture of the virtual object will cause state decay.
  • when the area around the virtual object satisfies the second condition, step 102 is executed.
  • the second condition includes: there is no factor capable of causing state attenuation of the virtual object in the region.
  • the surrounding area may be a space within a specified radius with the virtual object as the center. In specific implementation, the surrounding area may be divided according to actual needs, which is not limited by this embodiment of the present application.
  • the state attenuation can be life value and combat power attenuation; the factors causing the state attenuation can be enemy virtual objects and virtual props (such as traps or range damage props).
  • when the area does not satisfy the second condition, prompt information is displayed; the prompt information is used to indicate that tilting the posture of the virtual object carries a risk.
  • the prompt information can be presented to the user in any form such as sound, text or graphics. If the user still wants to perform the tilt posture after receiving the prompt, the user can perform the first rotation operation again; when the first rotation operation is received again, step 102 is performed.
  • the following is an example. If there is an enemy virtual object in the area around the virtual object, when the first rotation operation is received, a prompt message is displayed on the human-computer interaction interface and a prompt voice is issued to remind the user. After receiving the reminder, the user still decides to tilt the posture of the virtual object and performs the first rotation operation again; when the first rotation operation is received again, the posture of the virtual object is tilted in the corresponding direction according to the first rotation operation.
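  • a sketch of the second-condition flow described above; `scene.find_threats`, the prompt bookkeeping, and the search radius are hypothetical details of the example:

```python
def handle_first_rotation_with_risk_check(virtual_object, scene, radius: float = 10.0) -> str:
    # Second condition: no enemy virtual object or damaging prop in the area
    # around the virtual object can decay its state (health, combat power, ...).
    threats = scene.find_threats(center=virtual_object.position, radius=radius)
    if not threats:
        return "tilt"                      # proceed to step 102 directly
    if not virtual_object.tilt_prompt_shown:
        virtual_object.tilt_prompt_shown = True
        return "show_prompt"               # sound / text / graphic warning first
    return "tilt"                          # the user insisted: tilt despite the risk
```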
  • before step 102 is performed, it may also be determined whether the space where the virtual object is located is sufficient to perform the tilt posture, so as to prevent problems such as the virtual object clipping through the virtual scene.
  • the third condition includes: within the surrounding area there is no obstacle on the side, consistent with the rotation of the first rotation operation around the first rotation reference axis, toward which the virtual object would tilt (to its left or right). In a specific implementation, the surrounding area may be divided according to actual needs, which is not limited in this embodiment of the present application. Obstacles can be walls, trees, stones, etc. in the virtual scene.
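  • a sketch of the third-condition check using a hypothetical `scene.raycast` query; the lean reach distance is an assumed value:

```python
def meets_third_condition(scene, virtual_object, tilt_direction_vector) -> bool:
    # No obstacle (wall, tree, stone, ...) may lie on the side the object would
    # lean toward, within the distance the upper body sweeps while tilting.
    lean_reach = 0.6  # metres, assumed
    hit = scene.raycast(origin=virtual_object.chest_position,
                        direction=tilt_direction_vector,
                        max_distance=lean_reach)
    return hit is None
```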
  • control modes include: attitude tilt mode, lens rotation mode.
  • the posture tilting mode is a mode in which the virtual object is controlled to tilt through the first rotation operation.
  • the lens rotation mode is a mode in which the lens of the virtual scene is controlled to rotate around the first rotation reference axis through the first rotation operation.
  • when the value of the angular velocity of the first rotation operation is in the value space associated with the attitude tilt mode, it is determined to be in the attitude tilt mode, and execution proceeds to step 102.
  • the value space associated with the posture tilt mode can be set according to actual needs, or can be obtained according to the user's historical operation data, which is not limited by the embodiments of the present application.
  • the lens of the virtual scene is controlled to rotate around the first rotation reference axis.
  • the value space associated with the lens rotation mode can be set according to actual needs, or can be obtained according to the user's historical operation data, which is not limited by the embodiment of the present application.
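  • a sketch of choosing the control mode from the angular velocity's value space; the ranges below are placeholders, since the document leaves them to configuration or to the user's historical operation data:

```python
TILT_SPEED_RANGE = (0.0, 180.0)             # degrees per second, assumed
LENS_SPEED_RANGE = (180.0, float("inf"))    # degrees per second, assumed

def select_control_mode(angular_velocity: float) -> str:
    speed = abs(angular_velocity)
    if TILT_SPEED_RANGE[0] <= speed < TILT_SPEED_RANGE[1]:
        return "attitude_tilt"   # proceed to step 102: tilt the virtual object
    return "lens_rotation"       # rotate the lens about the first reference axis
```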
  • the first rotation reference axis is perpendicular to the human-computer interaction interface. The embodiment of the present application does not limit the actual position where the first rotation reference axis passes through the human-computer interaction interface.
  • the position where the first rotation reference axis passes through the human-computer interaction interface can be the center of the human-computer interaction interface, or the center of the head of the virtual object.
  • the virtual object maintains a standing posture
  • the value of the angular velocity of the first rotation operation is in the value space associated with the lens rotation mode
  • the first rotation operation is to rotate clockwise around the first rotation reference axis
  • the first rotation reference axis passes through the human-computer interaction interface at the head of the virtual object
  • the lens of the virtual scene rotates clockwise around the first rotation reference axis; in the displayed picture, the posture of the virtual object remains unchanged while the virtual scene rotates clockwise around the first rotation reference axis, and the rotation angle is positively correlated with the angle corresponding to the first rotation operation.
  • in step 107, when the detection result of step 106 is that the posture tilt mode is on, it is confirmed that step 102 can be executed.
  • when the detection result of step 106 is that the posture tilt mode is shielded, the process goes to step 108.
  • in step 108, it is determined that the lens rotation mode is active, and the lens of the virtual scene is controlled to rotate around the first rotation reference axis.
  • the attitude tilt mode has a corresponding setting switch, and when the option of the setting switch is set to on, the attitude tilt mode is turned on.
  • the setting switch corresponding to the attitude tilt mode may be displayed when the first rotation operation is received, or may be displayed in the setting list of the virtual scene.
  • the ON state of the posture tilt mode may be set before receiving the first rotation operation, or may be set on a switch displayed when the first rotation operation is received.
  • when the posture tilt mode is confirmed to be on and the first rotation operation is received, the posture of the virtual object is controlled to tilt to the left or right of the virtual object; when the posture tilt mode is confirmed to be in the shielded state, it is confirmed that the lens rotation mode is active.
  • in the lens rotation mode, the lens of the virtual scene is controlled to rotate around the first rotation reference axis in the direction of the first rotation operation, and the rotation angle is positively correlated with that of the first rotation operation.
  • the object control method in the virtual scene controls the posture of the virtual object to tilt or controls the lens of the virtual scene to rotate through a rotation operation of the electronic device, and uses the rotation operation to replace the traditional button operation.
  • the user does not need to press with multiple fingers at the same time to control the posture of the virtual object and the rotation of the lens, which improves the convenience of the user's operation and the control efficiency over the virtual scene.
  • the rotation operation is in the same direction as the tilt of the virtual object or the lens rotation of the virtual scene, and the angles are positively correlated, which improves the user's sense of immersion in the virtual scene and brings a more realistic visual experience.
  • buttons are usually set on the human-computer interaction interface, and each virtual interaction button is associated with different actions of the virtual object or with different rotation directions of the lens of the virtual scene.
  • button operations include but not limited to: click buttons, long press buttons, drag buttons, slide screens, etc.
  • when the user needs to press with multiple fingers at the same time and select the corresponding button from many virtual buttons, the operation difficulty increases
  • too many virtual buttons increase the occlusion rate of the human-computer interaction interface (on the one hand, the virtual buttons occlude the human-computer interaction interface; on the other hand, when the user presses a virtual button with a finger, the finger also blocks the surrounding area), which degrades the user's visual experience.
  • the embodiment of the present application provides an object control method in a virtual scene in which the posture of the virtual object or the lens of the virtual scene is controlled through rotation operations of the electronic device; for different rotation reference axes, the lens of the virtual scene can be rotated in different directions, which improves the convenience of operation.
  • FIG. 5 is an axial schematic diagram of the electronic device provided by the embodiment of the present application;
  • the electronic device is a mobile terminal, and the display screen of the mobile terminal displays a human-computer interaction interface, and the mobile terminal is in landscape mode
  • the first rotation reference axis (YAW axis) is perpendicular to the human-computer interaction interface and points upward out of it (the direction pointed by the arrow of the reference axis Z0 in FIG. 5)
  • the second rotation reference axis (ROLL axis) is parallel to the width direction of the human-computer interaction interface (the direction pointed by the arrow of the Y0 axis in FIG. 5)
  • the third rotation reference axis is parallel to the height direction of the human-computer interaction interface (the direction pointed by the arrow of the X0 axis in FIG. 5 ).
  • when the electronic device is in portrait mode, the first rotation reference axis (YAW axis) is still perpendicular to the human-computer interaction interface, and its positive direction is opposite to the direction of viewing the display screen, that is, the direction pointed by the arrow of the reference axis Z0 in FIG. 5
  • the second rotation reference axis is parallel to the length direction of the human-computer interaction interface, that is, the direction pointed by the arrow of the Y0 axis in FIG. 5
  • the third rotation reference axis is parallel to the width direction of the human-computer interaction interface, that is, the direction pointed by the arrow of the X0 axis in FIG. 5.
  • the first, second and third rotation reference axes are perpendicular to each other, but the direction of each reference axis can be set according to actual needs, which is not limited by the embodiments of the present application.
  • FIG. 8A and FIG. 8B are optional schematic flowcharts of the object control method in the virtual scene provided by the embodiment of the present application; refer also to FIG. 9A, FIG. 9B and FIG. 9C, which are schematic diagrams of virtual scenes displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • step 807A judge whether the virtual object can execute the right probe; if the judgment result of step 807A is yes, execute step 808A: control the current posture of the virtual object to switch to the right probe posture. If the judgment result of step 807A is no, execute step 804A: control the virtual object to maintain the current posture.
  • in FIG. 8A, the virtual object is controlled to execute the right probe; the visual representation can be seen in FIG. 9A and FIG. 9C.
  • a gyroscope is installed in the electronic device to detect the rotation operation of the electronic device, and the gyroscope detects the rotation angle or angular velocity of the electronic device every frame.
  • the embodiment of the present application takes the angle as an example for illustration, as shown in FIG. 9A and FIG. 9B
  • the electronic device is a mobile phone
  • a virtual scene is displayed in the human-computer interaction interface of the electronic device, and the virtual scene contains a virtual object 110.
  • the lens of the virtual scene in the third-person perspective faces the virtual object 110 as an example for illustration.
  • FIG. 9C shows the electronic device and the picture of the virtual scene displayed on it when no rotation operation is performed.
  • the virtual scene includes a virtual object 110 in an upright standing posture.
  • the gyroscope currently acquires the rotation angle Y1 of the electronic device on the YAW axis.
  • the rotation angle Y1 is greater than the angle threshold Y0
  • the virtual object 110 is controlled to tilt its posture correspondingly according to the direction and rotation angle of the first rotation operation.
  • the electronic device is subjected to the first rotation operation of rotating clockwise around the first rotation reference axis (YAW axis).
  • the straight line L1 is a straight line parallel to the width direction of the human-computer interaction interface, and the straight line L2 is the position of the straight line L1 before the first rotation operation; the angle formed by the two straight lines is the rotation angle Y1 of the first rotation operation around the YAW axis.
  • clockwise rotation corresponds to the right side of the virtual object 110
  • the rotation angle Y1 is greater than the angle threshold Y0
  • the posture of the virtual object 110 is tilted to the right of the virtual object 110, and the center of gravity of the head and the center of gravity of the torso of the virtual object 110 are no longer on the same vertical line after posture tilting.
  • the tilting posture can be the right probe.
  • FIG. 8B includes step 801 : detecting the rotation angle of the electronic device around each rotation reference axis in each frame.
  • Step 802B When it is confirmed that the electronic device rotates to the left of the virtual character around the first rotation reference axis, determine whether the rotation angle is greater than an angle threshold. If the judgment result of step 802B is no, then execute step 804: control the virtual object to maintain the current posture; if the judgment result of step 802B is yes, then execute step 805B: judge whether the virtual object is on the left probe; If yes, execute step 806B to control the virtual object to maintain the left probe.
  • step 807B judge whether the virtual object can execute the left probe; if the judgment result of step 807B is yes, execute step 808B: control the current posture of the virtual object to switch to the left probe posture. If the judgment result of step 807B is no, execute step 804: control the virtual object to maintain the current posture.
  • in FIG. 8B, the virtual object is controlled to execute the left probe; the visual representation can be seen in FIG. 9B.
  • the electronic device is subjected to a first rotation operation that rotates counterclockwise around the first rotation reference axis (YAW axis).
  • the counterclockwise rotation corresponds to the left side of the virtual object 110
  • the rotation angle is Y2, and the absolute value of Y2 is greater than the absolute value of the angle threshold Y0
  • the posture of the virtual object 110 is tilted to the left of the virtual object 110, and the center of gravity of the head and the center of gravity of the torso of the virtual object 110 are no longer on the same vertical line after the posture is tilted.
  • the tilt posture can be the left probe.
  • the first rotation operation corresponds to a different control mode, and when the value of the angular velocity or angle of the first rotation operation is in the value space associated with the attitude tilt mode, the attitude tilt control of the virtual object is performed.
  • the posture tilt mode is a mode in which the virtual object is controlled to tilt through the first rotation operation.
  • when the value of the angular velocity or angle of the first rotation operation is in the value space associated with the lens rotation mode, the lens rotation is controlled.
  • the lens rotation mode is a mode in which the lens of the virtual scene is controlled to rotate around the first rotation reference axis through the first rotation operation.
  • the attitude tilt mode and the lens rotation mode can also be turned on or off through switch settings; the lens rotation mode is turned on when the attitude tilt mode is shielded, the attitude tilt mode is turned on when the lens rotation mode is shielded, or both modes can be shielded at the same time.
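  • a small sketch of how the two switches could gate the first rotation operation; the settings structure is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RotationControlSettings:
    attitude_tilt_enabled: bool = True    # setting switch for the attitude tilt mode
    lens_rotation_enabled: bool = False   # setting switch for the lens rotation mode

    def mode_for_first_rotation(self) -> Optional[str]:
        if self.attitude_tilt_enabled:
            return "attitude_tilt"
        if self.lens_rotation_enabled:
            return "lens_rotation"
        return None  # both modes shielded: ignore the first rotation operation
```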
  • FIG. 8C is a schematic flowchart of an optional method for controlling an object in a virtual scene provided by an embodiment of the present application.
  • FIG. 8C includes step 801 : detecting the rotation angle of the electronic device around each rotation reference axis in each frame.
  • Step 802C When the electronic device rotates to the left of the virtual character around the first rotation reference axis, determine whether the value space of the rotation angle is in the value space of the attitude tilt mode. If the judgment result of step 802C is yes, execute step 805C: perform processing in attitude tilt mode; processing in attitude tilt mode can be represented by the flow shown in FIG. 8A or 8B .
  • if the judgment result of step 802C is no, execute step 806C: judge whether the rotation direction is clockwise. If the judgment result of step 806C is yes, execute step 807C: control the lens of the virtual scene to rotate clockwise around the first rotation reference axis; if the judgment result of step 806C is no, execute step 808C: control the lens of the virtual scene to rotate counterclockwise around the first rotation reference axis.
  • FIG. 10A is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • the lens rotation mode in FIG. 10A corresponds to step 807C in FIG. 8C.
  • the virtual building 124 is taken as an example for illustration.
  • the virtual building 124 is a one-story house.
  • the following virtual building 124 is the same virtual building.
  • the electronic device receives a first rotation operation rotating clockwise around the first rotation reference axis (YAW axis) with a rotation angle of Y7; the posture of the virtual object 110 maintains its original posture, and the virtual scene in the human-computer interaction interface rotates clockwise around the first rotation reference axis following the first rotation operation, with a rotation angle positively correlated with the rotation angle Y7 corresponding to the first rotation operation.
  • the screen display in the human-computer interaction interface is as follows: the virtual building 124 and the virtual object 110 are inclined to the right side of the human-computer interaction interface.
  • the positional relationship between the virtual building 124, the virtual object 110 and the ground or sky in the virtual scene remains unchanged, and only the screen corresponding to the virtual scene is shown as tilted.
  • FIG. 10B is a schematic diagram of a virtual scene displayed in the human-computer interaction interface provided by the embodiment of the present application.
  • the lens rotation mode in FIG. 10B corresponds to step 808C in FIG. 8C.
  • the electronic device is subjected to a first rotation operation that rotates counterclockwise around the first rotation reference axis (YAW axis), and the rotation angle is Y8
  • the posture of the virtual object (a standing posture in FIG. 10B) remains unaffected by the lens rotation (when the lens rotates, the center of gravity of the virtual object's head and the center of gravity of its torso stay on the same vertical line)
  • the virtual scene in the human-computer interaction interface rotates counterclockwise around the first rotation reference axis following the first rotation operation, and the rotation angle is positively correlated with the rotation angle Y8 corresponding to the first rotation operation.
  • the screen display in the human-computer interaction interface is as follows: the virtual building 124 and the virtual object 110 are inclined to the left side of the human-computer interaction interface.
  • the positional relationship between the virtual building 124, the virtual object 110 and the ground or sky in the virtual scene remains unchanged, and only the screen corresponding to the virtual scene is shown as tilted.
  • a third-person perspective in which the camera of the virtual scene is directly behind the virtual object is used as an example for illustration.
  • the camera of the virtual scene may be located in different directions in the third-person perspective.
  • the position where the first rotation reference axis passes through the human-computer interaction interface can be the center of the human-computer interaction interface.
  • the lens of the virtual scene rotates, around the first rotation reference axis passing through the center of the human-computer interaction interface, in the same direction as the first rotation operation, and the rotation angle is positively correlated with the angle corresponding to the first rotation operation.
  • the software modules in the device 455 may include: a display module 4551 configured to display a virtual scene in the human-computer interaction interface, wherein the virtual scene includes a virtual object; and a tilt control module 4552 configured to, in response to the first rotation operation, control the posture of the virtual object to tilt to the left or right of the virtual object, wherein the first reference axis corresponding to the first rotation operation is perpendicular to the human-computer interaction interface.
  • the tilt control module 4552 is further configured to: control at least part of the virtual object including the head to the left of the virtual object according to the direction consistent with the first rotation operation around the first rotation reference axis Or tilt to the right; wherein, the tilt angles of the parts of the virtual object whose head is downward decrease successively, and are all positively correlated with the angle of rotation around the first rotation reference axis based on the first rotation operation.
  • the tilt control module 4552 is further configured to: when the angle by which the first rotation operation rotates toward the left of the virtual object around the first rotation reference axis is greater than the angle threshold, control at least a part of the virtual object including the head to tilt to the left of the virtual object; and when the angle by which the first rotation operation rotates toward the right of the virtual object around the first rotation reference axis is greater than the angle threshold, control at least a part of the virtual object including the head to tilt to the right of the virtual object.
  • the tilt control module 4552 is further configured to: when the first rotation operation rotates toward the left of the virtual object around the first rotation reference axis, the angle is greater than the angle threshold, and the angular velocity is greater than the angular velocity threshold, control at least a part of the virtual object including the head to tilt to the left of the virtual object; and when the first rotation operation rotates toward the right of the virtual object around the first rotation reference axis, the angle is greater than the angle threshold, and the angular velocity is greater than the angular velocity threshold, control at least a part of the virtual object including the head to tilt to the right of the virtual object.
  • the threshold recognition model is obtained by training the rotation operation data sample and the response or non-response label of the rotation operation data sample label.
  • the tilt control module 4552 before controlling the posture of the virtual object to tilt to the left or right of the virtual object itself, is further configured to: in response to the current posture of the virtual object meeting the first condition, turn to execute The process of controlling the posture of the virtual object to tilt to the left or right of the virtual object; wherein, the first condition includes: the body part required for the virtual object to tilt based on the current posture is not in a working state.
  • the tilt control module 4552 before controlling the posture of the virtual object to tilt to the left or right of the virtual object itself, is further configured to: when the area around the virtual object satisfies the second condition, turn to perform control The posture of the virtual object is processed to be inclined to the left or right of the virtual object.
  • the second condition includes: there is no factor capable of causing state attenuation of the virtual object in the region.
  • the tilt control module 4552, before controlling the posture of the virtual object to tilt to the left or right of the virtual object, is further configured to: display prompt information when the area does not meet the second condition, wherein the prompt information is used to indicate that tilting the posture of the virtual object carries a risk; and, in response to the first rotation operation being received again, turn to the process of controlling the posture of the virtual object to tilt to the left or right of the virtual object.
  • the tilt control module 4552 is further configured to: control the lens of the virtual scene to rotate according to the direction consistent with the second rotation operation around the second rotation reference axis, wherein the rotation angle of the lens of the virtual scene is the same as The rotation angle of the second rotation operation around the second rotation reference axis is positively related.
  • the lens of the virtual scene is controlled to rotate in the same direction as the third rotation operation around the third rotation reference axis, wherein the rotation angle of the lens of the virtual scene is positively correlated with the rotation angle of the third rotation operation around the third rotation reference axis.
  • the tilt control module 4552, before controlling the posture of the virtual object to tilt to the left or right of the virtual object, is further configured to: when the value of the angular velocity of the first rotation operation is in the value space associated with the posture tilt mode, determine that it is in the posture tilt mode, and perform the process of controlling the posture of the virtual object to tilt to the left or right of the virtual object; wherein the posture tilt mode is a mode in which the virtual object is controlled to tilt through the first rotation operation.
  • the tilt control module 4552 is further configured to: when the value of the angular velocity of the first rotation operation is in the value space associated with the lens rotation mode, determine that it is in the lens rotation mode, and control the lens of the virtual scene to rotate around the first rotation reference axis; wherein the rotation angle of the lens of the virtual scene is positively correlated with the rotation angle of the first rotation operation around the first rotation reference axis.
  • the tilt control module 4552 is further configured to: determine that it is in the lens rotation mode, and control the lens of the virtual scene to rotate around the first rotation reference axis; wherein, the rotation angle of the lens of the virtual scene is the same as The rotation angle of the first rotation operation around the first rotation reference axis is positively correlated.
  • the first rotation operation, the second rotation operation, and the third rotation operation are performed on a terminal device, and the terminal device is used to display the human-computer interaction interface; or, the first rotation operation, the second rotation operation, and the third rotation operation are performed on a wearable device or a handle device, the wearable device or handle device is used to send a corresponding control signal to the terminal device, and the terminal device is used to display the human-computer interaction interface.
  • An embodiment of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the object control method in the virtual scene described above in the embodiment of the present application.
  • the embodiment of the present application provides a computer-readable storage medium storing executable instructions.
  • when the executable instructions are executed by a processor, the processor is caused to execute the object control method in the virtual scene provided by the embodiment of the present application, for example, the object control method in the virtual scene shown in FIG. 3A.
  • the computer-readable storage medium can be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disk, or a CD-ROM, or may be various devices including one or any combination of the above memories.
  • executable instructions may take the form of programs, software, software modules, scripts, or code written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and they can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may, but do not necessarily correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in a Hyper Text Markup Language (HTML) document in one or more scripts, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).
  • the embodiment of the present application controls the attitude of the virtual objects in the virtual scene displayed on the human-computer interaction interface or controls the lens of the virtual scene by performing rotation operations around different rotation reference axes corresponding to the terminal device;
  • the rotation operation replaces the traditional button operation for controlling the posture of the virtual object or the lens of the virtual scene.
  • the user does not need to press with multiple fingers at the same time to control the posture of the virtual object and the rotation of the lens, which improves the convenience of operation and the control efficiency over the virtual scene.
  • buttons that would otherwise be set on the human-computer interaction interface are saved, which reduces the occlusion of the human-computer interaction interface.
  • Setting the attitude tilt mode and camera rotation mode enriches the types of rotation operations that can be controlled, improves the freedom of operation, and improves the user's visual experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an object control method and apparatus in a virtual scene, a terminal device, a computer-readable storage medium, and a computer program product. The method includes: displaying a virtual scene in a human-computer interaction interface, the virtual scene including a virtual object; in response to a first rotation operation, controlling the posture of the virtual object to tilt to the left or right of the virtual object, where the first reference axis corresponding to the first rotation operation is perpendicular to the human-computer interaction interface; in response to a second rotation operation, controlling the lens of the virtual scene to rotate around a second rotation reference axis that is parallel to the width direction of the human-computer interaction interface; and in response to a third rotation operation, controlling the lens of the virtual scene to rotate around a third rotation reference axis that is parallel to the height direction of the human-computer interaction interface.

Description

虚拟场景中的对象控制方法、装置、终端设备、计算机可读存储介质、计算机程序产品
相关申请的交叉引用
本申请实施例基于申请号为202111220651.8,申请日为2021年10月20日,名称为:虚拟场景中的对象控制方法、装置及终端设备的中国专利申请,以及申请号为202111672726.6、申请日为2021年12月31日,名称为:虚拟场景中的对象控制方法、装置及终端设备的中国专利申请提出,本申请要求申请号为202111220651.8,申请日为2021年10月20日,名称为:虚拟场景中的对象控制方法、装置及终端设备的中国专利申请的优先权,以及要求申请号为202111672726.6、申请日为2021年12月31日,名称为:虚拟场景中的对象控制方法、装置及终端设备的中国专利申请的优先权。
技术领域
本申请涉及计算机技术,尤其涉及一种虚拟场景中的对象控制方法、装置、终端设备、计算机可读存储介质及计算机程序产品。
背景技术
目前,用户控制虚拟对象进行游戏时,通常是通过点击人机交互界面上显示的虚拟按键来控制虚拟对象进行姿态的转换。
人机交互界面上一般设置有多个虚拟交互按钮,用于关联虚拟对象的多种虚拟姿态或虚拟场景的镜头的方向调整,这种设置对游戏画面造成了遮挡;在同时进行对虚拟对象动作控制与虚拟镜头方向控制的情况下,用户需要使用多个手指执行按压操作,从多个虚拟交互按钮中选择对应的按钮也需要一定的时间,用户的操作难度大,影响了对虚拟场景的操控效率。
发明内容
本申请实施例提供一种虚拟场景中的对象控制方法、装置、设备、计算机程序产品及计算机可读存储介质,能够提升针对虚拟场景的操控效率,节约显示虚拟按钮所需的计算资源。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种虚拟场景中的对象控制方法,所述方法包括:
在人机交互界面中显示虚拟场景;其中,所述虚拟场景包括虚拟对象;
响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面;
响应于第二旋转操作,控制所述虚拟场景的镜头绕第二旋转基准轴转动;其中,所述第二旋转基准轴平行于所述人机交互界面的宽度方向;
响应于第三旋转操作,控制所述虚拟场景的镜头绕第三旋转基准轴转动;其中,所述第三旋转基准轴平行于所述人机交互界面的高度方向。
本申请实施例提供一种虚拟场景中的对象控制方法,所述方法包括:
在人机交互界面中显示虚拟场景;其中,所述虚拟场景包括虚拟对象;
响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面。
本申请实施例提供一种虚拟场景中的对象控制装置,所述装置包括:
显示模块,配置为在人机交互界面中显示虚拟场景;其中,所述虚拟场景包括虚拟对象;
第一控制模块,配置为响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面;
第二控制模块,配置为响应于第二旋转操作,控制所述虚拟场景的镜头绕第二旋转基准轴转动;其中,所述第二旋转基准轴平行于所述人机交互界面的宽度方向;
第三控制模块,配置为响应于第三旋转操作,控制所述虚拟场景的镜头绕第三旋转基准轴转动;其中,所述第三旋转基准轴平行于所述人机交互界面的高度方向。
本申请实施例提供一种用于虚拟场景中的对象控制的电子设备,所述电子设备包括:
存储器,用于存储可执行指令;
处理器,用于执行所述存储器中存储的可执行指令时,实现本申请实施例提供的任意一种虚拟场景中的对象控制方法。
本申请实施例提供一种计算可读存储介质,存储有可执行指令,用于被处理器执行时实现本申请实施例提供的任意一种虚拟场景中的对象控制方法。
本申请实施例提供一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时实现本申请实施例提供的任意一种虚拟场景中的对象控制方法。
本申请实施例具有以下有益效果:
通过绕终端设备所对应的不同旋转基准轴进行旋转操作,对人机交互界面中显示的虚拟场景内的虚拟对象进行姿态控制或者对虚拟场景的镜头进行控制;通过旋转操作替代传统的按键操作控制虚拟对象姿态或者虚拟场景的镜头,用户无需同时使用多个手指进行按压操作来实现对虚拟对象姿态控制和镜头转动控制,由于节约了在人机交互界面设置的按键,从而减少了对人机交互界面的遮挡,提升了对虚拟场景的操控效率。
附图说明
图1A是本申请实施例提供的虚拟场景中的对象控制方法的应用模式示意图;
图1B是本申请实施例提供的虚拟场景中的对象控制方法的应用模式示意图;
图2是本申请实施例提供的终端设备400的结构示意图;
图3A是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图3B是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图3C是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图4A是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图4B是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图4C是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;
图5是本申请实施例提供的电子设备的轴向示意图;
图6A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图6B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图7A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图7B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图8A是本申请实施例提供的虚拟场景中的对象控制方法的一个可选的流程示意图;
图8B是本申请实施例提供的虚拟场景中的对象控制方法的一个可选的流程示意图;
图8C是本申请实施例提供的虚拟场景中的对象控制方法的一个可选的流程示意图;
图9A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图9B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图9C是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图10A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图10B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;
图11A是本申请实施例提供的第三人称视角下虚拟对象方向的示意图;
图11B是本申请实施例提供的第三人称视角下虚拟对象方向的示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一\第二\第三”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
需要指出,在本申请实施例中,涉及到用户信息、用户反馈数据等相关的数据,当本申请实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)响应于,用于表示所执行的操作所依赖的条件或者状态,当满足所依赖的条件或状态时,所执行的一个或多个操作可以是实时的,也可以具有设定的延迟;在没有特别说明的情况下,所执行的多个操作不存在执行先后顺序的限制。
2)虚拟场景,是应用程序在电子设备上运行时显示(或提供)的虚拟场景。该虚拟场景可以是对真实世界的仿真环境,也可以是半仿真半虚构的虚拟场景,还可以是纯虚构的虚拟场景。虚拟场景可以是二维虚拟场景、2.5维虚拟场景或者三维虚拟场景中的任意一种,本申请实施例对虚拟场景的维度不加以限定。例如,虚拟场景可以包括天空、陆地、海洋等,该陆地可以包括沙漠、城市等环境元素,用户可以控制虚拟对象在该虚拟场景中进行移动。
3)虚拟对象,虚拟场景中进行交互的对象,受到用户或机器人程序(例如,基于人工智能的机器人程序)的控制,能够在虚拟场景中静止、移动以及进行各种行为的对象,例如游戏中的各种角色等。
4)第三人称射击类游戏(Third Personal Shooting Game,TPS),指游戏者可以通过游戏画面观察到自己操作的人物。与第一人称射击游戏的区别在于第一人称射击游戏里屏幕上显示的只有主角的视野,而第三人称射击游戏中主角在游戏屏幕上是可见的。
5)旋转基准轴,是终端设备对应的空间直角坐标系的各轴,各旋转基准轴之间互相垂直,其中,空间直角坐标系的一个轴垂直于电子设备用于进行人机交互的平面,另外两轴构成的平面与电子设备用于进行人机交互的平面平行。
6)陀螺仪,角运动检测装置,用于检测绕各旋转基准轴进行旋转的角度、角速度等信息。
7)镜头,观看虚拟场景的工具,通过拍摄虚拟场景的部分区域而在显示屏上显示虚拟场景的画面。以游戏为例,游戏画面是通过镜头拍摄虚拟场景的部分区域而获取的,用户(例如:玩家)通过控制镜头的移动,可以观看到虚拟场景中不同区域的画面。
以虚拟场景为游戏场景为例,在游戏中用户若需要进行对虚拟对象的姿态进行调整,通常是手指按压对应的按键来控制虚拟对象的姿态进行转换;若用户想要对对虚拟场景的镜头方向进行调整,则需要使用手指在人机交互界面上滑动来控制镜头方向。也就是说,人机交互界面上需要设置大量的虚拟按键来关联虚拟对象的各种姿态,对人机交互界面造成了过多的遮挡,导致用户的视觉体验较差,按键数量过多也不便于用户快速选择对应的按键,若用户进行较为复杂操作则需要同时使用多个手指点击按键或者滑动屏幕,提升了操作难度。
针对上述技术问题,本申请实施例提供一种虚拟场景中的对象控制方法、虚拟场景中的对象控制装置、终端设备、计算可读存储介质及计算机程序产品。为便于更容易理解本申请实施例提供的虚拟场景中的对象控制方法,首先说明本申请实施例提供的虚拟场景中对象控制方法的示例性实施场景,虚拟场景可以完全基于终端设备输出,或者基于终端设备和服务器的协同来输出。
本申请实施例中提供的方法可以应用于虚拟现实应用程序、三维地图程序、第一人称射击游戏(First-Person Shooting game,FPS)、第三人称射击游戏、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)等,下文实施例是以在游戏中的应用来举例说明。
下面结合终端设备对应用场景进行介绍。
在一个实施场景中,参考图1A,图1A是本申请实施例提供的虚拟场景中的对象控制方法的应用模式示意图。适用于一些完全依赖于终端设备400的图形处理硬件计算能力即可完成虚拟场景100的相关数据计算的应用模式,例如单机版/离线模式的游戏,通过智能手机、平板电脑和虚拟现实/增强现实设备等各种不同类型的终端设备400完成虚拟场景的输出。
当形成虚拟场景100的视觉感知时,终端设备400通过图形计算硬件计算显示所需要的显示数据,并完成显示数据的加载、解析和渲染,在图形输出硬件输出能够对虚拟场景形成视觉感知的视频帧,例如,在智能手机的屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;此外,为了丰富感知效果,终端设备400还可以借助不同的硬件来形成听觉感知、触觉感知、运动感知和味觉感知的一种或多种。
作为一个示例,终端设备400运行单机版的游戏应用,在游戏应用的运行过程中输出包括有动作角色扮演的虚拟场景,虚拟场景可以是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等;以第三人称视角显示虚拟场景为例,在虚拟场景中显示有虚拟对象,虚拟对象为受控于真实用户的游戏角色,响应于真实用户针对控制器(例如:陀螺仪、触控屏、声控开关、键盘、鼠 标和摇杆等)的操作而在虚拟场景中运动。例如:当真实用户点击触控屏上的虚拟按键,虚拟对象将执行虚拟按键关联的动作。
终端设备400可以为各种类型的移动终端,例如智能手机、平板电脑、掌上游戏终端、增强现实设备、虚拟现实设备等。以移动终端为例,参考图1A,通过移动终端的显示屏显示虚拟场景,虚拟场景包括虚拟对象,移动终端内设置有陀螺仪(本申请实施例并不限制角运动检测装置为陀螺仪,当其他角运动检测装置可以实现本申请实施例的方案时,也可以采用其他的角运动检测装置),陀螺仪用于检测针对移动终端的旋转操作。移动终端所对应的旋转基准轴中的三轴,分别对应于不同的控制方式,在通过陀螺仪接收到旋转操作时,移动终端根据该旋转操作对应的旋转基准轴控制虚拟对象或者虚拟场景的镜头。通过绕不同旋转基准轴进行的旋转操作,使得用户无需进行按键点击,就能够控制虚拟对象进行姿态调整或者控制虚拟场景的镜头进行调整,提升对虚拟场景的操控效率。
在对图1B进行说明之前,首先对终端设备和服务器协同实施的方案涉及的游戏模式进行介绍。针对终端设备和服务器协同实施的方案,涉及两种游戏模式,分别为本地游戏模式和云游戏模式,其中,本地游戏模式是指终端设备和服务器协同运行游戏处理逻辑,用户(例如:玩家)在终端设备中输入的操作指令,部分由终端设备运行游戏逻辑处理,另一部分由服务器运行游戏逻辑处理,并且,服务器运行的游戏逻辑处理往往更复杂,需要消耗更多的算力;云游戏模式是指完全由服务器运行游戏逻辑处理,并由云端服务器将游戏场景数据渲染为音视频流,并通过网络传输至终端设备显示。终端设备只需要拥有基本的流媒体播放能力与获取用户(例如:玩家)的操作指令并发送给服务器的能力。
在另一个实施场景中,参见图1B,图1B是本申请实施例提供的虚拟场景中的对象控制方法的应用模式示意图,应用于终端设备400和服务器200,适用于依赖于服务器200的计算能力完成虚拟场景计算、并在终端设备400输出虚拟场景的应用模式。
以形成虚拟场景100的视觉感知为例,服务器200进行虚拟场景相关显示数据(例如场景数据)的计算并通过网络300发送到终端设备400,终端设备400依赖于图形计算硬件完成计算显示数据的加载、解析和渲染,依赖于图形输出硬件输出虚拟场景以形成视觉感知,例如可以在智能手机的显示屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;对于虚拟场景的形式的感知而言,可以理解,可以借助于终端设备400的相应硬件输出,例如使用麦克风形成听觉感知,使用振动马达形成触觉感知等。
作为示例,终端设备400运行客户端(例如网络版的游戏应用),通过连接游戏服务器(即服务器200)与其他用户进行游戏互动,终端设备400输出游戏应用的虚拟场景,虚拟场景可以是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等;以第三人称视角显示虚拟场景为例,在虚拟场景中显示有虚拟对象,虚拟对象为受控于真实用户的游戏角色,响应于真实用户针对控制器(例如:陀螺仪、触控屏、声控开关、键盘、鼠标和摇杆等)的操作而在虚拟场景中运动。例如:当真实用户点击触控屏上的虚拟按键,虚拟对象将执行虚拟按键关联的动作。
作为示例,终端设备400接收到第一旋转操作并将信号发送至服务器200,服务器200根据信号对虚拟对象的姿态进行倾斜,并将表示虚拟对象的姿态的显示数据下发至终端设备400,使终端设备400向用户显示虚拟对象的姿态向左向或者右向进行倾斜。
在本申请一些实施例中,终端设备接收其他电子设备发送的控制信号,并根据控制信号对虚拟场景中的虚拟对象进行控制。其他电子设备可以为手柄设备(例如:有线手柄设备、无线手柄设备、无线遥控器等)且内部设置有陀螺仪,手柄设备在接收到旋转操作时,根据旋转操作生成相应的控制信号,并将控制信号发送到终端设备,终端设备根据控制信号控制虚拟场景中的虚拟对象的姿态向虚拟对象的左向或者右向进行倾斜。
在本申请一些实施例中,终端设备接收其他电子设备发送的控制信号,并根据控制信号对虚拟场景中的虚拟对象进行控制。其他电子设备可以为可穿戴式设备(例如:耳机、头盔、智能手环等)且内部设置有陀螺仪,可穿戴式设备在接收到旋转操作时,根据旋转操作生成相应的控制信号,并将控制信号发送到终端设备,终端设备根据控制信号控制虚拟场景中的虚拟对象的姿态向虚拟对象的左向或者右向进行倾斜。若其他电子设备为成对的可穿戴式设备,比如蓝牙耳机,可穿戴式设备的左耳部分与右耳部分均设置有陀螺仪。
其他电子设备还可以是手柄设备,例如:游戏手柄。游戏手柄内部设置有陀螺仪,游戏手柄在接收到旋转操作时,根据旋转操作生成相应的控制信号,并将控制信号发送到终端设备,终端设备根据控制信号控制虚拟场景中的虚拟对象的姿态向虚拟对象的左向或者右向进行倾斜。或者,对镜头方向进行旋转。
在一些实施例中,终端设备400可以通过运行计算机程序来实现本申请实施例提供的虚拟场景中的对象控制方法,例如,计算机程序可以是操作系统中的原生程序或软件模块;可以是本地(Native)应用程序(APP,APPlication),即需要在操作系统中安装才能运行的程序,例如游戏APP(即上述的客户端); 也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的游戏小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
本申请实施例可以借助于云技术(Cloud Technology)实现,云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。
云技术是基于云计算商业模式应用的网络技术、信息技术、整合技术、管理平台技术、以及应用技术等的总称,可以组成资源池,按需所用,灵活便利。云计算技术将变成重要支撑。技术网络系统的后台服务需要大量的计算、存储资源。
作为示例,服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端设备400可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、以及智能手表等,但并不局限于此。终端设备400以及服务器200可以通过有线或无线通信方式进行直接或间接地连接,本申请实施例中不做限制。
参见图2,图2是本申请实施例提供的终端设备400的结构示意图;图2所示的终端设备400包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。终端设备400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图2中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器、一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450可选地包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,配置为实现各种基础业务以及处理基于硬件的任务。
网络通信模块452,配置为经由一个或多个(有线或无线)网络接口420到达其他计算设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等。
呈现模块453,配置为经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,配置为操作外围设备和显示内容和信息的用户接口)。
输入处理模块454,配置为对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的虚拟场景中的对象控制装置可以采用软件方式实现,图2示出了存储在存储器450中的虚拟场景中的对象控制装置455,其可以是程序和插件等形式的软件,包括以下软件模块:显示模块4551、倾斜控制模块4552,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分,需要指出,在图2中为了方便表达,一次性示出了上述模块,但是不应视为在虚拟场景中的对象控制装置455排除了可以只包括显示模块4551的实施方式,将在下文中说明各个模块的功能。
参见图3A,图3A是本申请实施例提供的虚拟场景中对象控制方法的可选的流程示意图,下面将结合图3A对通过绕不同旋转基准轴进行旋转操作对人机交互界面中显示的虚拟场景内的虚拟对象进行姿态控制的过程进行说明,同时,以执行主体为终端设备为例进行说明。
本申请实施例提供的虚拟场景中的对象控制方法可以由图1A中的终端设备400单独执行,也可以由图1B中的终端设备400和服务器200协同执行。
以终端设备400和服务器200协同执行为例,步骤102中控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜可以由终端设备400和服务器200协同执行,服务器200计算出虚拟对象姿态的显示数据后,将显示数据返回至终端设备400进行显示;又例如,步骤103中虚拟场景的镜头绕第二旋转基准轴转动可以由终端设备400和服务器200协同执行,服务器200计算出虚拟场景的镜头转动的显示数据后,将显示数据返回至终端设备400进行显示。
以终端设备400单独执行为例,步骤102中控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜可以由终端设备400单独执行,在终端设备400的陀螺仪感应到针对终端设备400的第一旋转操作时,控制虚拟场景中的虚拟对象根据第一旋转操作向左向或者右向进行倾斜,终端设备400的人机交互界面对应地显示虚拟对象的姿态变化。
以终端设备400与其他的电子设备(例如:手柄设备、可穿戴设备)协同执行为例,步骤102中控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜可以由终端设备400与其他的电子设备协同执行,响应于针对电子设备的第一旋转操作,电子设备通过内置的陀螺仪感应第一旋转操作,并将第一旋转操作对应的控制信号发送给终端设备400,终端设备400根据控制信号控制虚拟对象向左向或者右向进行倾斜,终端设备400的人机交互界面对应地显示虚拟对象的姿态变化。
下面,以由图1A中的终端设备400(下文简称为终端设备)单独执行本申请实施例提供的虚拟场景中的对象控制方法为例说明。参见图3A,图3A是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图,将结合图3A示出的步骤进行说明。
需要说明的是,图3A示出的方法可以由终端设备400运行的各种形式计算机程序执行,并不局限于上述的客户端,例如上文的操作系统451、软件模块和脚本,因此客户端不应视为对本申请实施例的限定。
在步骤101中,在人机交互界面中显示虚拟场景。
作为示例,终端设备具有图形计算能力以及图形输出能力,可以是智能手机、平板电脑和虚拟现实/增强现实眼镜等,在步骤101以及后续操作中,在终端设备的人机交互界面中显示虚拟场景,虚拟场景是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等;虚拟对象可以是受用户(或称玩家)控制的游戏角色,即虚拟对象受控于真实用户,将响应于真实用户针对输入处理模块454(包括触控屏、声控开关、键盘、鼠标和摇杆、陀螺仪等)的操作而在虚拟场景中运动。
在步骤102中,响应于第一旋转操作,控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜。
在一些实施例中,第一旋转操作是针对于电子设备的绕第一旋转基准轴进行的旋转操作,第一旋转操作对应的第一基准轴垂直于电子设备的人机交互界面,其中,电子设备与执行本申请实施例的虚拟场景中的对象控制方法的终端设备可以是同一设备,电子设备与终端设备也可以是不同的设备。
以电子设备为参照物,对第一基准轴所处的坐标系进行说明。参考图5,图5是本申请实施例提供的电子设备的轴向示意图;图5中示例性示出电子设备为移动终端的情况,移动终端的显示屏中显示有人机交互界面,在移动终端处于横屏模式下,第一旋转基准轴(YAW轴)垂直于人机交互界面向上(图5中参考轴Z0的箭头所指向的方向),第二旋转基准轴(ROLL轴)平行于人机交互界面的宽度方向(图5中Y0轴的箭头所指向的方向),第三旋转基准轴(PITCH轴)平行于人机交互界面高度方向(图5中X0轴的箭头所指向的方向)。同理,若电子设备处于竖屏模式下,第一旋转基准轴(YAW轴)垂直于人机交互界面,那么正方向为观看显示屏的方向的反向,即图5中参考轴Z0的箭头所指向的方向,第二旋转基准轴(ROLL轴)平行于人机交互界面的长度方向,即图5中Y0轴的箭头所指向的方向,第三旋转基准轴(PITCH轴)平行于人机交互界面的宽度方向,即图5中X0轴的箭头所指向的方向。
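作为示例,下面给出一段仅用于说明的Python示意代码,演示如何把陀螺仪在图5所示设备坐标系中的分量映射到第一(YAW)、第二(ROLL)、第三(PITCH)旋转基准轴;其中的数据结构与函数名均为本示例的假设,并非本申请实施例限定的实现:

```python
from dataclasses import dataclass

@dataclass
class GyroSample:
    """陀螺仪在图5所示设备坐标系三个轴上的旋转角度(单位:度),字段名为假设。"""
    x0: float  # 平行于人机交互界面高度方向的分量
    y0: float  # 平行于人机交互界面宽度方向的分量
    z0: float  # 垂直于人机交互界面的分量

def map_to_reference_axes(sample: GyroSample) -> dict:
    """把设备坐标系分量映射为第一(YAW)、第二(ROLL)、第三(PITCH)旋转基准轴上的旋转角度。"""
    return {
        "yaw": sample.z0,    # 第一旋转基准轴:垂直于人机交互界面
        "roll": sample.y0,   # 第二旋转基准轴:平行于界面宽度方向
        "pitch": sample.x0,  # 第三旋转基准轴:平行于界面高度方向
    }
```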
这里,虚拟对象的左向或者右向,是以虚拟对象的自身感知为参考而确定的,与用户感知的左向、右向可以一致,也可以相反,下面示例性说明。
作为示例,参考图11A,图11A是本申请实施例提供的第三人称视角下虚拟对象方向的示意图;图11A中用户正面朝向人机交互界面,用户感知的左向、右向如图11A中的参考轴所示。在图11A中,虚拟场景的镜头面向虚拟对象110的背后,虚拟对象所对应的方向如虚拟对象110上方的参考轴所示,在这种情况下,虚拟对象的左向与用户感知的左向是同向,虚拟对象的右向与用户感知的右向是同向。
作为示例,参考图11B,图11B是本申请实施例提供的第三人称视角下虚拟对象方向的示意图。图11B中用户正面朝向人机交互界面,用户感知的左向、右向如图11B中的参考轴所示。在图11B中,虚拟场景的镜头面向虚拟对象110的正面,虚拟对象所对应的方向如虚拟对象110上方的参考轴所示,在这种情况下,虚拟对象的左向与用户感知的左向是相反的,虚拟对象的右向也与用户感知的右向是相反的。
如前所述,电子设备与终端设备可以是同一设备,终端设备可以是内部设置有陀螺仪的移动终端(例如:智能手机、平板电脑、掌上游戏终端、增强现实设备等);电子设备与终端设备也可以是不同的设备,下面结合不同的场景进行说明。
在一些实施例中,电子设备与终端设备是同一设备,终端设备可以是内部设置有陀螺仪的移动终端(例如:智能手机、平板电脑、掌上游戏终端、增强现实设备等),终端设备依靠陀螺仪感应的数据来识别第一旋转操作,进而响应于第一旋转操作来控制虚拟对象的姿态。
在终端设备接收到第一旋转操作之前,虚拟对象处于初始姿态,为便于解释说明,本申请实施例中以虚拟对象的初始姿态为直立的站姿为例进行说明,参考图9C,图9C是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;图9C中L1为平行于人机交互界面宽度方向的直线,虚拟场景的镜头面向虚拟对象的背后,虚拟对象110的当前姿态为直立的站姿。将图9C中的直立站姿作为本申请实施例后续的解释说明的参照物。
当终端设备接收到第一旋转操作时,若第一旋转操作为绕YAW轴顺时针进行旋转,参考图9A,图9A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图。在图9A中,终端设备绕YAW轴顺时针进行旋转,直线L2的位置是第一旋转操作执行前直线L1所在的位置,直线L1与直线L2形成的夹角Y1为第一旋转操作绕YAW轴旋转的角度。根据第一旋转操作控制虚拟对象110向虚拟对象的姿势的右向进行倾斜,相较于图9C中的直立站姿,图9A中虚拟对象110的姿态为向右倾斜的姿态。
当终端设备接收到第一旋转操作时,若第一旋转操作为绕YAW轴逆时针进行旋转,参考图9B,图9B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图。在图9B中,终端设备绕YAW轴逆时针进行旋转,直线L2的位置是第一旋转操作执行前直线L1所在的位置,直线L1与直线L2形成的夹角Y2为第一旋转操作绕YAW轴旋转的角度。根据第一旋转操作控制虚拟对象110向虚拟对象的姿势的左向进行倾斜,相较于图9C中的直立站姿,图9B中虚拟对象110的姿态为向左倾斜的姿态。
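作为示例,上述根据绕YAW轴的旋转方向确定倾斜方向的逻辑可以用下面的Python示意代码概括;其中顺时针为正角、逆时针为负角仅是本示例的约定,函数名亦为假设:

```python
def resolve_lean_direction(yaw_angle: float, angle_threshold: float) -> str:
    """根据绕第一旋转基准轴(YAW轴)的旋转角度返回虚拟对象的倾斜方向。

    本示例假设:顺时针旋转角度为正,对应向虚拟对象的右向倾斜;
    逆时针旋转角度为负,对应向虚拟对象的左向倾斜;
    未超过角度阈值时维持当前姿态。
    """
    if yaw_angle > angle_threshold:
        return "lean_right"
    if yaw_angle < -angle_threshold:
        return "lean_left"
    return "keep_pose"
```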
在另一些实施例中,电子设备与终端设备是不同设备,电子设备可以是内部设置有陀螺仪的手柄设备(例如:有线手柄设备、无线手柄设备、无线遥控器等),响应于针对手柄设备的第一旋转操作,手柄设备基于第一旋转操作生成对应的角运动信号,并将角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟对象的姿态进行倾斜。
电子设备还可以是内部设置有陀螺仪的可穿戴式设备(例如:耳机、头盔、智能手环等),响应于针对可穿戴设备的第一旋转操作,可穿戴设备基于第一旋转操作生成对应的角运动信号,并将角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟对象的姿态进行倾斜。
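作为示例,下面的Python示意代码展示终端设备接收手柄设备或可穿戴式设备发送的角运动信号并据此控制虚拟对象倾斜的一种可能写法;信号格式、字段名与controller接口均为本示例的假设:

```python
import json

def handle_control_signal(raw_signal: bytes, controller) -> None:
    """解析外部设备发送的角运动信号,并调用姿态控制逻辑。

    本示例假设信号为JSON,包含绕第一旋转基准轴的旋转角度yaw_angle(顺时针为正);
    controller为提供lean_left/lean_right接口的姿态控制对象,均为示意。
    """
    signal = json.loads(raw_signal.decode("utf-8"))
    yaw_angle = signal.get("yaw_angle", 0.0)
    if yaw_angle > 0:
        controller.lean_right(abs(yaw_angle))   # 顺时针:向虚拟对象的右向倾斜
    elif yaw_angle < 0:
        controller.lean_left(abs(yaw_angle))    # 逆时针:向虚拟对象的左向倾斜
```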
本申请实施例,通过倾斜操作控制虚拟对象的虚拟姿态随倾斜操作对应的方向进行倾斜,提升了针对虚拟场景中的虚拟对象的操控效率。相较于通过虚拟按键控制虚拟对象姿态的方式,用户可以通过更少的按压操作控制虚拟对象执行多种组合姿态(例如:射击并倾斜上半身),减轻了操控难度,节约了人机交互界面的布置虚拟按键的空间,节约了人机交互界面显示虚拟按键所需的计算资源,减少了对人机交互界面的遮挡。
在步骤103中,响应于第二旋转操作,控制虚拟场景的镜头绕第二旋转基准轴转动。
这里,第二旋转基准轴平行于人机交互界面的宽度方向。
示例的,虚拟场景的镜头位于虚拟场景的空间内,终端设备的人机交互界面显示的虚拟场景的画面是虚拟场景的镜头对虚拟场景的内容拍摄得到的。
这里,第二旋转操作是电子设备绕第二旋转基准轴(ROLL轴)进行的旋转操作,虚拟场景的镜头根据第二旋转操作绕第二旋转基准轴旋转一致的方向进行转动,虚拟场景的镜头的转动角度与第二旋转操作绕第二旋转基准轴旋转的角度正相关。
作为示例,虚拟场景的镜头的转动角度与第二旋转操作绕第二旋转基准轴旋转的角度之间通过正比例函数约束,或者通过上升趋势的曲线函数进行约束。
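作为示例,上述正比例函数约束或上升趋势的曲线函数约束可以用下面的Python示意代码表示;比例系数与曲线形状仅为示意,并非本申请实施例限定的取值:

```python
import math

def camera_angle_linear(op_angle: float, k: float = 1.0) -> float:
    """正比例函数约束:镜头转动角度 = k * 旋转操作角度(k > 0,取值为假设)。"""
    return k * op_angle

def camera_angle_curve(op_angle: float, max_angle: float = 60.0) -> float:
    """上升趋势的曲线函数约束:操作角度越大,镜头转动角度越大,并逐渐趋于上限。

    此处用 max_angle * tanh(|op_angle| / max_angle) 仅作为一种单调上升曲线的示意。
    """
    sign = 1.0 if op_angle >= 0 else -1.0
    return sign * max_angle * math.tanh(abs(op_angle) / max_angle)
```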
第二旋转操作是针对电子设备的绕第二旋转基准轴进行的旋转操作。上述的第二旋转操作的实施对象是电子设备,电子设备与执行图1A或1B各步骤的终端设备可以是同一设备,此时终端设备可以是内部设置有陀螺仪的移动终端(例如:智能手机、平板电脑、掌上游戏终端、增强现实设备等);电子设备与终端设备也可以是不同的设备,下面结合不同的场景进行说明。
在一些实施例中,终端设备针对控制终端设备旋转的第二旋转操作来控制虚拟场景的镜头。参考图9C,将图9C作为在终端设备接收到第二旋转操作之前的人机交互界面中显示虚拟场景的示意图。
例如:第二旋转操作是终端设备绕第二旋转基准轴逆时针转动,虚拟场景的镜头绕第二旋转基准轴逆时针转动,转动方向一致且转动角度正相关,虚拟场景的镜头向虚拟场景的空间所对应的下方转动,人机交互界面中应显示为虚拟场景的画面从人机交互界面的下边界向上边界为方向移动显示新画面,并在第二旋转操作结束时画面停止移动。
其中,正相关是指虚拟场景的镜头的转动角度与第二旋转操作的转动角度之间呈现正比例,或者,虚拟场景的镜头的转动角度与第二旋转操作的转动角度之间的变化趋势是相同的,例如:第二旋转操作的转动角度增长,虚拟场景的镜头的转动角度增长。
参考图6A,图6A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;本申请实施例中以参照物为虚拟建筑120为例进行说明,下文虚拟建筑120为同一虚拟建筑,虚拟建筑120为一栋两层楼房,在图6A中仅显示了虚拟建筑120的一部分,但跟随虚拟场景的镜头方向变化,终端设备的人机交互界面显示的画面能够展示虚拟建筑120不同的部分。当虚拟场景的镜头挂载于虚拟对象的头部位置, 且虚拟场景的镜头所对应的平面与虚拟场景空间内的竖直方向为垂直关系时,参考图9C,人机交互界面中显示的虚拟场景包括:虚拟对象110与虚拟建筑120的一楼,虚拟建筑120的一楼包括:完整的虚拟建筑的门121。
参考图6A,终端设备绕第二旋转基准轴(图6A中ROLL轴)逆时针转动,直线L3的位置为执行第二旋转操作之前人机交互界面的一侧的边界线L5所在的位置,第二旋转操作对应的转动角度Y3是边界线L5与直线L3之间的夹角,虚拟场景的镜头跟随第二旋转操作向虚拟场景的空间所对应的下方转动的角度与转动角度Y3正相关。人机交互界面中显示虚拟对象110、虚拟建筑120的一部分、虚拟建筑的门121的一部分及虚拟场景地面130,相较于图9C,图6A中的终端设备的人机交互界面显示的画面中虚拟建筑的门121的上边界不可见,新出现了虚拟场景地面130。
继续参考图9C,将图9C作为接收到第二旋转操作之前的人机交互界面中显示虚拟场景的示意图。再例如,第二旋转操作是终端设备绕第二旋转基准轴顺时针转动,虚拟场景的镜头绕第二旋转基准轴顺时针转动,转动方向一致且转动角度正相关,虚拟场景的镜头向虚拟场景的空间所对应的上方转动,人机交互界面中应显示为虚拟场景的画面从人机交互界面的上边界向下边界为方向移动显示新画面,并在第二旋转操作结束时画面停止移动。
参考图6B,图6B是本申请实施例提供的虚拟场景中的人机交互界面的示意图;终端设备绕第二旋转基准轴(图6B中的ROLL轴)顺时针转动,直线L3的位置为执行第二旋转操作之前人机交互界面的一侧的边界线L5所在的位置,第二旋转操作对应的转动角度Y4是边界线L5与直线L3之间的夹角。参考图6B可知,虚拟场景的镜头跟随第二旋转操作向虚拟场景的空间所对应的上方转动的角度与转动角度Y4正相关。人机交互界面中显示虚拟对象110、虚拟建筑120的一楼及二楼、虚拟建筑的门121的一部分,相较于图9C,图6B中的终端设备的人机交互界面显示的画面中,虚拟建筑的门121的下边界不可见,新出现了虚拟建筑二楼的窗户122。
在另一些实施例中,电子设备与终端设备是不同设备,电子设备可以是内部设置有陀螺仪的手柄设备(例如:有线手柄设备、无线手柄设备、无线遥控器等),即手柄设备针对控制手柄设备旋转的第二旋转操作生成对应的角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟场景的镜头进行转动。电子设备还可以是内部设置有陀螺仪的可穿戴式设备(例如:耳机、头盔、智能手环等),即可穿戴式设备针对控制可穿戴式设备旋转的第二旋转操作生成对应的角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟场景的镜头进行转动。
本申请实施例,通过倾斜操作控制虚拟场景的镜头随倾斜操作对应的方向进行倾斜,提升了针对虚拟场景的镜头的操控效率。通过倾斜操作控制镜头进行转动,便于向用户展示虚拟场景中的不同视野的画面,相较于通过虚拟按键控制镜头,减轻了操控难度,节约了人机交互界面的布置虚拟按键的空间,节约了人机交互界面显示虚拟按键所需的计算资源,减少虚拟按键对人机交互界面的遮挡。
在步骤104中,响应于针对电子设备的第三旋转操作,控制虚拟场景的镜头绕第三旋转基准轴转动。
示例的,电子设备是终端设备,第三旋转基准轴平行于终端设备的人机交互界面的高度方向。
这里,第三旋转操作是终端设备绕第三旋转基准轴(PITCH轴)进行的旋转操作,虚拟场景的镜头根据第三旋转操作绕第三旋转基准轴旋转一致的方向进行转动,虚拟场景的镜头的转动角度与第三旋转操作绕第三旋转基准轴旋转的角度正相关。
作为示例,虚拟场景的镜头的转动角度与第三旋转操作绕第三旋转基准轴旋转的角度之间通过正比例函数约束,或者通过上升趋势的曲线函数进行约束。
这里,第三旋转操作是针对电子设备的绕第三旋转基准轴进行的旋转操作。上述的第三旋转操作的实施对象是电子设备,电子设备与执行图1A或1B各步骤的终端设备可以是同一设备,此时终端设备可以是内部设置有陀螺仪的移动终端(例如:智能手机、平板电脑、掌上游戏终端、增强现实设备等);电子设备与终端设备也可以是不同的设备,下面结合不同的场景进行说明。
在一些实施例中,即终端设备针对控制终端设备旋转的第三旋转操作来控制虚拟场景的镜头。参考图9C,将图9C作为在终端设备接收到第三旋转操作之前的人机交互界面中显示虚拟场景的示意图。例如:第三旋转操作是终端设备绕第三旋转基准轴逆时针转动,进而,虚拟场景的镜头绕第三旋转基准轴逆时针转动,转动方向一致且转动角度正相关,虚拟场景的镜头在虚拟场景中向面对人机交互界面的用户感知的左向转动,人机交互界面中应显示为虚拟场景的画面从人机交互界面的左边界向右边界为方向移动显示新画面,并在第三旋转操作结束时画面停止移动。
这里,人机交互界面的右边界与左边界的方向是由面向人机交互界面的用户感知的左右方向确定的。
参考图7A,图7A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;电子设备绕第三旋转基准轴(图7A中PITCH轴)逆时针转动,直线L4的位置为执行第三旋转操作之前人机交互界面的一侧的边界线L6所在的位置,第三旋转操作对应的转动角度Y5是边界线L6与直线L4之间的夹角。虚拟场景的镜头跟随第三旋转操作在虚拟场景中向面对人机交互界面的用户感知的左向转动的角度与转动角度Y5正相关。人机交互界面中显示虚拟对象110、虚拟建筑120的一部分。相较于图9C,图7A的人机交互界面显示的画面中新出现了虚拟建筑120的左侧边界,该左侧是面对人机交互界面的用户感知的左侧。
继续参考图9C作为旋转操作之前的人机交互界面的画面。再例如:第三旋转操作是终端设备绕第三旋转基准轴顺时针转动,则虚拟场景的镜头绕第三旋转基准轴顺时针转动,转动方向一致且转动角度正相关,虚拟场景的镜头在虚拟场景中向面对人机交互界面的用户感知的右向转动,人机交互界面中应显示为虚拟场景的画面从人机交互界面的右边界向左边界为方向移动显示新画面,并在第三旋转操作结束时画面停止移动。
参考图7B,图7B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图;电子设备绕第三旋转基准轴(图7B中PITCH轴)顺时针转动,直线L4的位置为执行第三旋转操作之前人机交互界面的一侧的边界线L6所在的位置,第三旋转操作对应的转动角度Y6是边界线L6与直线L4之间的夹角。虚拟场景的镜头跟随第三旋转操作在虚拟场景中向面对人机交互界面的用户感知的右向转动的角度与转动角度Y6正相关。则人机交互界面中显示虚拟对象110、虚拟建筑120的一部分。相较于图9C,图7B的人机交互界面显示的画面中新出现了虚拟建筑120的右侧边界,该右侧是面对人机交互界面的用户感知的右侧。
在另一些实施例中,电子设备与终端设备是不同设备,电子设备可以是内部设置有陀螺仪的手柄设备(例如:有线手柄设备、无线手柄设备、无线遥控器等),即手柄设备针对控制手柄设备旋转的第三旋转操作生成对应的角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟场景的镜头进行转动。电子设备还可以是内部设置有陀螺仪的可穿戴式设备(例如:耳机、头盔、智能手环等),即可穿戴式设备针对控制可穿戴式设备旋转的第三旋转操作生成对应的角运动信号发送至终端设备,终端设备根据角运动信号来控制虚拟场景的镜头进行转动。
参考图3A,在步骤101之后可以执行步骤102、步骤103或者步骤104。步骤102、步骤103与步骤104之间不存在执行次序限制,在接收到步骤对应的旋转操作时,能够执行对应的步骤。
这里,第一旋转操作、第二旋转操作及第三旋转操作所绕的旋转基准轴并不相同,三种操作之间互不干扰,三种操作可以同时进行或者仅进行一种或者两种。第一旋转操作对应于虚拟对象的姿态的控制,第二旋转操作对应于绕第二旋转基准轴进行镜头转动,第三旋转操作对应于绕第三旋转基准轴进行镜头转动,由于各操作对应的旋转基准轴不同,镜头转动方向上不存在异向,姿态调整与镜头调整也不存在冲突,因此三种操作对应的控制能够同时进行。
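作为示例,三种旋转操作在同一帧内分别派发到各自处理逻辑的过程可以用下面的Python示意代码概括;其中的死区阈值与接口名称均为本示例的假设:

```python
def dispatch_rotations(axes: dict, pose_ctrl, camera_ctrl,
                       dead_zone: float = 1.0) -> None:
    """在每帧内分别处理绕三个旋转基准轴的旋转分量,互不干扰。

    axes形如{"yaw": ..., "roll": ..., "pitch": ...}(单位:度);
    dead_zone为忽略微小抖动的死区阈值,数值仅为假设。
    """
    yaw = axes.get("yaw", 0.0)
    roll = axes.get("roll", 0.0)
    pitch = axes.get("pitch", 0.0)
    if abs(yaw) > dead_zone:      # 第一旋转操作:控制虚拟对象姿态倾斜
        pose_ctrl.lean(yaw)
    if abs(roll) > dead_zone:     # 第二旋转操作:镜头绕第二旋转基准轴转动
        camera_ctrl.rotate_roll(roll)
    if abs(pitch) > dead_zone:    # 第三旋转操作:镜头绕第三旋转基准轴转动
        camera_ctrl.rotate_pitch(pitch)
```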
在一些实施例中,参考图3B,图3B是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;图3B中各步骤与图3A中各步骤内容相同,示例的,图3B中,在步骤101之后,顺次执行步骤102、步骤103、步骤104。
在一些实施例中,参考图3C,图3C是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;步骤101之后还包括:步骤105,确认针对电子设备的旋转操作的类型。旋转操作的类型包括:第一旋转操作、第二旋转操作及第三旋转操作。步骤105确认旋转操作类型,确认的结果可以为:三种旋转操作中任意两种正在执行;三种旋转操作中任意一种正在执行;三种旋转操作同时执行。在确认当前有哪些旋转操作后,再分别执行各旋转操作对应的步骤。通过执行步骤105,能够有效确认当前执行的旋转操作的类型,为电子设备预留处理时间。例如:步骤105确认当前针对电子设备执行的旋转操作为第一旋转操作与第三旋转操作,参考图3C,步骤105之后执行步骤102与步骤104,由于未进行第二旋转操作,步骤103不被执行。通过第一旋转操作与第三旋转操作组合,可以在镜头绕第三旋转基准轴转动时,控制虚拟对象的姿态向左向或者右向进行倾斜,若第一旋转操作对应于向虚拟对象的左向倾斜,第三旋转操作对应于绕第三旋转基准轴逆时针旋转,则在人机交互界面上显示为虚拟场景的画面向虚拟对象的左侧移动,虚拟对象的姿态向左倾斜。
在一些实施例中,步骤102可以通过以下方式实现:根据与第一旋转操作绕第一旋转基准轴旋转一致的方向,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向或者右向进行倾斜;作为示例,其中,虚拟对象的头部向下的各部分的倾斜角度依次减小,且均与第一旋转操作基于第一旋转基准轴旋转的角度正相关。
作为示例,虚拟对象的运动模型包括头部、颈部、四肢及躯干;包括头部在内的至少部分可以是虚拟对象的头部、颈部、上肢、腰部及腰部以上的躯干部分。或者,包括头部在内的至少部分可以是虚拟对象的头部、颈部、上肢、肩部及胸部。为便于解释,以下将虚拟对象进行倾斜前的姿态作为第一姿态,进行倾斜后的姿态作为第二姿态。第一姿态可以是头部重心与躯干重心处于同一直线的姿态,例如:站姿或者蹲姿;第二姿态是头部重心与躯干重心不处于同一直线的姿态。例如:左探头姿态或者右探头姿态。控制虚拟对象的姿态进行倾斜,可以表征为:将虚拟对象的姿态由第一姿态切换至第二姿态,在虚拟对象的姿态进行倾斜后,将第二姿态作为新的第一姿态。
在一些实施例中,参见图4A,图4A是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图,步骤102中响应于针对电子设备的第一旋转操作,控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜,可以通过图4A中的步骤1021及步骤1022实现。
在步骤1021中,当第一旋转操作绕第一旋转基准轴向虚拟对象的左向进行旋转的角度大于角度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向进行倾斜。
在步骤1022中,当第一旋转操作绕第一旋转基准轴向虚拟对象的右向进行旋转的角度大于角度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的右向进行倾斜。
作为示例,在图4A中,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向或者右向进行倾斜被执行的前提是,第一旋转操作向虚拟对象的左向或者右向进行旋转的角度大于角度阈值。角度阈值可以是根据旋转操作记录训练学习得到的值,以便更好地判断用户的旋转操作是否满足执行姿态左向或者右向旋转的前提。通过学习旋转操作记录,提升了控制虚拟对象的姿态倾斜的准确性,提升了人机交互效率,避免了误操作导致的姿态切换,节约了终端设备的计算资源。
示例的,可以通过以下方式获取角度阈值:获取针对电子设备的第一旋转操作的历史记录数据,其中,历史记录数据包括:最近的预设时长(例如:7天)的第一旋转操作的转动角度;统计不同的转动角度的出现频率,将出现频率最高的转动角度作为角度阈值。或者,统计每个转动角度,将转动角度的中位数作为角度阈值。
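作为示例,上述两种获取角度阈值的统计方式可以用下面的Python示意代码表示,历史记录数据的形式为本示例的假设:

```python
from collections import Counter
from statistics import median

def angle_threshold_by_mode(history_angles: list) -> float:
    """取出现频率最高的转动角度作为角度阈值;为便于统计,先将角度取整到1度。"""
    counts = Counter(round(a) for a in history_angles)
    most_common_angle, _ = counts.most_common(1)[0]
    return float(most_common_angle)

def angle_threshold_by_median(history_angles: list) -> float:
    """取全部转动角度的中位数作为角度阈值。"""
    return float(median(history_angles))
```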
在一些实施例中,参见图4B,图4B是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图,步骤102中响应于针对电子设备的第一旋转操作,控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜,可以通过图4B中的步骤1023及步骤1024实现。
在步骤1023中,当第一旋转操作绕第一旋转基准轴向虚拟对象的左向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向进行倾斜。
在步骤1024中,当第一旋转操作绕第一旋转基准轴向虚拟对象的右向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的右向进行倾斜。
作为示例,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向或者右向进行倾斜被执行的前提是,第一旋转操作向虚拟对象的左向或者右向进行旋转的角度大于角度阈值且角速度大于角速度阈值。
在一些实施例中,角度阈值或者角速度阈值可以是预先设置的固定值,也可以是根据用户的历史操作数据进行确定的值。举例来说,获取针对虚拟对象的历史操作数据,由于用户的行为习惯偶尔会发生改变,可以获取最接近当前时刻的设定时间内的或者最接近的设定数量的旋转操作的操作记录作为历史操作数据。历史操作数据可以包括:旋转操作对应的旋转方向、旋转角速度、操作起始时的角度;基于历史操作数据调用阈值识别模型,得到能够用于识别针对虚拟对象的异常操作的角度阈值和角速度阈值;其中,阈值识别模型是通过旋转操作数据样本、以及旋转操作数据样本标记的响应或不响应的标签进行训练得到。异常操作包括但不限于:旋转操作的角速度超出用户所能达到的角速度、旋转操作的起始角度差大于用户常规操作对应的角度差等。旋转操作数据样本可以是虚拟对象所对应的真实用户的常规操作时的旋转操作数据的集合。旋转操作所对应的旋转角度大于角度阈值,或者旋转角度大于角度阈值且旋转角速度大于角速度阈值,旋转操作满足执行控制虚拟对象的姿态进行倾斜的条件,则该旋转操作的标签为响应,反之则被标记为不响应。通过上述方式,能够建立贴近于用户习惯的模型,通过该模型确定符合用户习惯的角度阈值与角速度阈值,提升操作的响应率,同时防止异常操作对虚拟对象进行控制的情况发生。
需要说明的是,阈值识别模型是机器学习模型。机器学习模型可以是神经网络模型(例如卷积神经网络、深度卷积神经网络、或者全连接神经网络等)、决策树模型、梯度提升树、多层感知机、以及支持向量机等,本申请实施例对机器学习模型的类型不作具体限定。
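作为示例,下面的Python示意代码用一种非常简化的方式,从标注了响应或不响应标签的旋转操作数据样本中估计角度阈值与角速度阈值;该写法仅用于说明思路,并非本申请实施例所述阈值识别模型的实际训练方式,字段名均为假设:

```python
def fit_thresholds(samples: list) -> tuple:
    """根据标注为响应/不响应的旋转操作数据样本估计角度阈值与角速度阈值。

    samples中每项形如{"angle": 旋转角度, "speed": 旋转角速度, "respond": True/False},
    字段名为假设。这里取被响应样本的最小角度/最小角速度作为阈值,仅是一种粗略估计。
    """
    responded = [s for s in samples if s["respond"]]
    if not responded:
        return 0.0, 0.0
    angle_threshold = min(s["angle"] for s in responded)
    speed_threshold = min(s["speed"] for s in responded)
    return angle_threshold, speed_threshold
```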
在一些实施例中,在执行步骤102之前,还可以确认虚拟对象的当前姿态是否能够进行对应方向倾斜。在虚拟对象的当前姿态满足第一条件时,执行步骤102。其中,第一条件包括:虚拟对象基于当前姿态进行倾斜所需活动的身体部分未处于工作状态。进行倾斜所需的身体部分包括:虚拟对象的腰部以上的躯干部分及头部、颈部、上肢,或者包括:虚拟对象的头部、颈部、胸部、肩部及上肢。
以下进行举例说明,例如:第一旋转操作是电子设备绕第一旋转基准轴向虚拟对象的左向旋转。在当前姿态为左探头姿态时,进行左探头所需的全部身体部分均处于工作状态,不满足第一条件,当前姿态不能再次执行左探头,维持左探头姿态;在虚拟对象的当前姿态为右探头姿态时,姿态向左倾斜所需的身体部分不处于工作状态,满足第一条件,则执行姿态向虚拟对象的左边进行倾斜;在当前姿态为驾驶姿态时,驾驶姿态下虚拟对象的上肢用于执行驾驶,为工作状态,则当前姿态不满足第一条件,维持 当前姿态;在虚拟对象处于奔跑姿态或者趴下姿态时,倾斜所需的身体部分用于维持当前位姿而处于工作状态,则当前姿态不满足第一条件,维持当前姿态;虚拟对象处于蹲姿、站姿、坐姿(例如:虚拟对象坐在虚拟载具的非驾驶位上)时,维持当前位姿不需要使用倾斜所需的身体部分,那么当前姿态满足第一条件,执行左探头。
在一些实施例中,在执行步骤102之前,还可以确认虚拟对象姿态倾斜时会不会造成状态衰减。当虚拟对象周围的区域满足第二条件时,执行步骤102。其中,第二条件包括:在区域内不存在能够对虚拟对象造成状态衰减的因素。周围的区域可以是以虚拟对象为圆心指定半径范围内的空间,具体实施中,周围的区域可以根据实际需求进行划分,本申请实施例不对此构成限制。状态衰减可以是生命值、战斗力衰减;造成状态衰减的因素可以是敌方虚拟对象、虚拟道具(例如:陷阱或者范围性伤害道具)。
在一些实施例中,为提升用户的游戏体验,当虚拟对象周围的区域不满足第二条件时,显示提示信息;其中,提示信息用于表征虚拟对象倾斜姿态时将存在风险。提示信息可以通过声音、文字或者图形等任意形式向用户显示,如果用户在接收到提示后仍然想执行倾斜姿态,可以再次进行第一旋转操作,则在再次接收到第一旋转操作时,执行步骤102。
以下进行举例说明,例如:虚拟对象周围的区域内存在敌方虚拟对象,在接收到第一旋转操作时,向在人机交互界面显示提示信息并发出提示语音对用户进行提醒,用户在接收提醒后仍然决定倾斜虚拟对象的姿态,再次进行了第一旋转操作,再次接收到第一旋转操作时,根据该第一旋转操作对虚拟对象的姿态进行对应方向的倾斜。
在一些实施例中,在执行步骤102之前,可以通过判断虚拟对象所处的空间是否足够执行倾斜姿态,防止虚拟对象在虚拟场景中出现穿模等问题。当虚拟对象周围的区域满足第三条件时,转入步骤102;其中,第三条件包括:在区域内与第一旋转操作绕第一旋转基准轴旋转一致的方向上,不存在阻挡虚拟对象左向或右向倾斜的障碍物。具体实施中,周围的区域可以根据实际需求进行划分,本申请实施例不对此构成限制。障碍物可以为虚拟场景中的墙壁、树木、石块等。
以下进行举例说明,例如:虚拟对象站立在虚拟场景中房屋的墙角处,在接收到绕第一旋转基准轴向虚拟对象左向倾斜的第一旋转操作时,虚拟对象的左向存在障碍物墙壁,不满足第三条件,则不执行控制虚拟对象的姿态左向倾斜的处理,维持当前姿态;虚拟对象蹲在虚拟场景中的树木背后,在接收到绕第一旋转基准轴向虚拟对象左向倾斜的第一旋转操作时,虚拟对象的左向没有障碍物,满足第三条件,则执行控制虚拟对象的姿态左向倾斜的处理。
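作为示例,第三条件中判断倾斜方向上是否存在障碍物,可以用下面的Python示意代码在二维平面上做一个简化判断;障碍物的表示方式、探测距离等均为本示例的假设:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """虚拟场景中障碍物的简化表示:圆心坐标与半径,仅为示意。"""
    x: float
    y: float
    radius: float

def can_lean(object_x: float, object_y: float, lean_dir: str,
             obstacles: list, reach: float = 0.6) -> bool:
    """判断虚拟对象在倾斜方向上reach距离内是否没有障碍物(即是否满足第三条件)。"""
    step = reach if lean_dir == "right" else -reach
    probe_x, probe_y = object_x + step, object_y
    for ob in obstacles:
        if (probe_x - ob.x) ** 2 + (probe_y - ob.y) ** 2 <= ob.radius ** 2:
            return False  # 倾斜方向上存在障碍物,应维持当前姿态
    return True
```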
在一些实施例中,在执行步骤102之前,对第一旋转操作对应的取值空间进行判断,以确认第一旋转操作对应的控制模式。控制模式包括:姿态倾斜模式、镜头转动模式。
作为示例,其中,姿态倾斜模式是通过第一旋转操作控制虚拟对象进行倾斜的模式。镜头转动模式是通过第一旋转操作控制虚拟场景的镜头绕第一旋转基准轴进行镜头转动的模式。
在一些实施例中,当第一旋转操作的角速度的取值处于与姿态倾斜模式关联的取值空间时,确定处于姿态倾斜模式,并转入执行步骤102。与姿态倾斜模式关联的取值空间可以根据实际需求进行设置,也可以根据用户的历史操作数据进行获取,本申请实施例不对此构成限制。
在一些实施例中,当第一旋转操作的角速度的取值处于与镜头转动模式关联的取值空间时,确定处于镜头转动模式,控制虚拟场景的镜头绕第一旋转基准轴旋转。与镜头转动模式关联的取值空间可以根据实际需求进行设置,也可以根据用户的历史操作数据进行获取,本申请实施例不对此构成限制。第一旋转基准轴垂直于人机交互界面,本申请实施例并不限制第一旋转基准轴穿过人机交互界面的实际位置,第一旋转基准轴穿过人机交互界面的位置可以在人机交互界面的中心位置,或者虚拟对象头部的中心位置。
以下进行举例说明,例如:虚拟对象维持站立姿态,第一旋转操作的角速度的取值处于镜头转动模式关联的取值空间,第一旋转操作为绕第一旋转基准轴顺时针进行转动,第一旋转基准轴从虚拟对象的头部穿过人机交互界面,虚拟场景的镜头绕第一旋转基准轴顺时针进行转动,显示为虚拟对象的姿态保持不变,虚拟场景与虚拟对象同步地绕第一旋转基准轴顺时针进行转动,转动角度与第一旋转操作对应的角度正相关。
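作为示例,根据第一旋转操作的角速度所处取值空间确定控制模式的过程可以用下面的Python示意代码表示;取值空间的端点数值仅为示意,实际可按需求或历史操作数据设定:

```python
def select_control_mode(angular_speed: float,
                        lean_range: tuple = (30.0, 120.0),
                        camera_range: tuple = (0.0, 30.0)) -> str:
    """根据第一旋转操作的角速度(度/秒)所处取值空间返回控制模式。

    lean_range为与姿态倾斜模式关联的取值空间,camera_range为与镜头转动模式
    关联的取值空间,端点数值仅为示例假设。
    """
    speed = abs(angular_speed)
    if lean_range[0] <= speed <= lean_range[1]:
        return "pose_lean_mode"      # 姿态倾斜模式
    if camera_range[0] <= speed < camera_range[1]:
        return "camera_rotate_mode"  # 镜头转动模式
    return "ignore"                  # 不在任一取值空间内,不作响应
```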
参考图4C,图4C是本申请实施例提供的虚拟场景中的对象控制方法的流程示意图;图4C中步骤101之后可以执行步骤106。
在步骤106中,检测姿态倾斜模式的状态。在步骤106的检测结果为姿态倾斜模式是开启状态时,则可以转入步骤107。在步骤107中,当姿态倾斜模式的状态是开启状态时,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理。
示例的,在步骤107之后确认可以执行步骤102。在步骤106的检测结果为姿态倾斜模式被屏蔽时,则可以转入步骤108。在步骤108中,确定处于镜头转动模式,控制虚拟场景的镜头绕第一旋转基准轴进行转动。
在一些实施例中,姿态倾斜模式具有对应的设置开关,在设置开关的选项被设置为开启时,姿态倾斜模式被开启。作为示例,姿态倾斜模式对应的设置开关可以在接收到第一旋转操作时进行显示,也可以在虚拟场景的设置列表中进行显示。姿态倾斜模式的开启状态可以在接收到第一旋转操作之前被设置,也可以在接收到第一旋转操作时显示的开关上进行设置。
在一些实施例中,当姿态倾斜模式被确认为开启状态时,在接收到第一旋转操作时,控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜;当姿态倾斜模式被确认为屏蔽状态时,确认处于镜头转动模式,在接收到第一旋转操作时,控制虚拟场景的镜头根据第一旋转操作绕第一旋转基准轴转动的方向进行转动且转动角度正相关。
本申请实施例提供的虚拟场景中的对象控制方法,通过针对电子设备进行的旋转操作,控制虚拟场景中的虚拟对象的姿态进行倾斜或者控制虚拟场景的镜头进行转动,利用旋转操作替代了传统的按键操作,用户无需同时使用多个手指进行按压操作来实现对虚拟对象姿态控制和镜头转动控制,提升用户操作的便利性,提升对虚拟场景的操控效率。同时,旋转操作与虚拟对象的姿态倾斜或虚拟场景的镜头转动的方向相同且角度正相关,提升了用户对虚拟场景的代入感,为用户带来更真实的视觉体验。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。
传统的按键操作控制虚拟对象的方案中,通常在人机交互界面上设置多种虚拟交互按钮,各虚拟交互按钮关联虚拟对象的不同动作或者关联虚拟场景的镜头的不同转动方向。用户同时进行虚拟镜头转动与虚拟对象姿态控制的情况下,需要调动多个手指进行按键操作(按键操作包括但不限于:点击按键、长按按键、拖动按键、滑动屏幕等),操作难度上升,且虚拟按键过多提升了对人机交互界面的遮挡率(一方面,虚拟按键对人机交互界面进行了遮挡,另一方面,用户使用手指按压虚拟按键时也会对按键周边区域进行遮挡),使用户的视觉体验下降。
针对上述技术问题,本申请实施例提供了一种虚拟场景中的对象控制方法,通过针对电子设备的旋转操作对虚拟对象的姿态或者虚拟场景的镜头进行控制,针对不同的旋转基准轴,可以进行不同方向上的虚拟场景的镜头转动,提升操作的便利性。
示例的,参考图5,图5是本申请实施例提供的电子设备的轴向示意图;图5中电子设备为移动终端,移动终端的显示屏显示人机交互界面,在移动终端处于横屏模式下,第一旋转基准轴(YAW轴)垂直于人机交互界面向上(图5中参考轴Z0所对应的上方),第二旋转基准轴(ROLL轴)平行于人机交互界面的宽度方向(图5中Y0轴的箭头所指向的方向),第三旋转基准轴(PITCH轴)平行于人机交互界面高度方向(图5中X0轴的箭头所指向的方向)。同理,若电子设备处于竖屏模式下,第一旋转基准轴(YAW轴)垂直于人机交互界面,则正方向为观看显示屏的方向的反向,即图5中参考轴Z0的箭头所指向的方向,第二旋转基准轴(ROLL轴)平行于人机交互界面的长度方向,即图5中Y0轴的箭头所指向的方向,第三旋转基准轴(PITCH轴)平行于人机交互界面的宽度方向,即图5中X0轴的箭头所指向的方向。
第一、第二及第三旋转基准轴之间互相垂直,但各基准轴的方向可以根据实际需求进行设置,本申请实施例并不对此构成限制。
在另一些实施例中,例如:电子设备为可穿戴的虚拟现实设备的情况下,ROLL轴垂直于人机交互界面且穿过人机交互界面向观看人机交互界面的反方向延伸,PITCH轴平行于人机交互界面的宽度方向且向人机交互界面的右侧延伸,YAW轴平行于人机交互界面的高度方向且向人机交互界面的上方延伸。
本申请实施例基于图5中的各旋转基准轴的方向为例进行说明。
以下结合流程图与示意图进行说明,参考图8A与图8B,图8A及图8B是本申请实施例提供的虚拟场景中的对象控制方法的一个可选的流程示意图;参考图9A、图9B、图9C,图9A、图9B、图9C是本申请实施例提供的人机交互界面中显示虚拟场景的示意图。
参考图8A,图8A中包括步骤801:在显示每帧虚拟场景的图像时,检测电子设备绕各个旋转基准轴进行旋转的旋转角度。步骤802A:在确认电子设备绕第一旋转基准轴向虚拟角色的右向进行旋转时,判断旋转角度是否大于角度阈值。若步骤802A的判断结果为否,则执行步骤804:控制虚拟对象维持当前姿态;若步骤802A的判断结果为是,则执行步骤805A:判断虚拟对象是否处于右探头;若步骤805A的判断结果为是,则执行步骤806A,控制虚拟对象维持右探头。若步骤805A的判断结果为否,则执行步骤807A:判断虚拟对象是否可以执行右探头,若步骤807A的判断结果为是,则执行步骤808A:控制虚拟对象的当前姿态切换为右探头姿态。若步骤807A的判断结果为否,则执行步骤804:控制虚拟对象维持当前姿态。
图8A中控制虚拟对象执行右探头,视觉表现上可以参考图9A及图9C。
示例的,电子设备中设置有陀螺仪以检测针对电子设备的旋转操作,陀螺仪每帧检测电子设备的旋转角度或角速度,本申请实施例以角度为例进行说明,如图9A、图9B所示,本申请实施例中电子设备 为手机,电子设备的人机交互界面中显示有虚拟场景,虚拟场景中包含虚拟对象110,本申请实施例以第三人称视角下虚拟场景的镜头面对虚拟对象110的背后为例进行说明。
示例的,参考图9C,图9C为没有执行任何旋转操作时,电子设备及电子设备中显示的虚拟场景的画面,虚拟场景中包括虚拟对象110,虚拟对象为直立的站姿。
示例的,参考图9A,陀螺仪当前获取的电子设备在YAW轴的旋转角度Y1,在旋转角度Y1大于角度阈值Y0时,根据第一旋转操作的方向与旋转角度控制虚拟对象110执行对应的姿态倾斜。参考图9A,电子设备受到了绕第一旋转基准轴(YAW轴)顺时针转动的第一旋转操作,图9A中直线L1为平行于人机交互界面宽度方向的直线,直线L2为第一旋转操作前直线L1所处的位置,两直线形成的夹角为第一旋转操作绕YAW轴的旋转角度Y1,在当前镜头方向下,顺时针旋转对应于虚拟对象110的右侧,旋转角度Y1大于角度阈值Y0,虚拟对象110的姿态向虚拟对象110的右向进行倾斜,进行姿态倾斜后虚拟对象110的头部的重心与躯干的重心不再处于同一垂直线,参考图9A,倾斜姿态可以为右探头。若第一旋转操作结束后,电子设备受到了其他旋转操作,且其他旋转操作对应的旋转角度Y1小于角度阈值Y0,虚拟对象110不再保持右探头姿态,恢复为原姿态。在虚拟对象110的初始姿态不满足执行右探头姿态的条件时,即使第一旋转操作的旋转角度Y1大于角度阈值Y0,虚拟对象110的初始姿态也不切换为右探头姿态。例如:虚拟对象110的初始姿态为跑步、游泳、趴下或者驾驶姿态,不满足执行右探头姿态的条件,若此时的第一旋转操作的旋转角度Y1大于角度阈值Y0,无法执行探头姿势。
参考图8B,图8B中包括步骤801:每帧检测电子设备绕各个旋转基准轴进行旋转的旋转角度。步骤802B:在确认电子设备绕第一旋转基准轴向虚拟角色的左向进行旋转时,判断旋转角度是否大于角度阈值。若步骤802B的判断结果为否,则执行步骤804:控制虚拟对象维持当前姿态;若步骤802B的判断结果为是,则执行步骤805B:判断虚拟对象是否处于左探头;若步骤805B的判断结果为是,则执行步骤806B,控制虚拟对象维持左探头。若步骤805B的判断结果为否,则执行步骤807B:判断虚拟对象是否可以执行左探头,若步骤807B的判断结果为是,则执行步骤808B:控制虚拟对象的当前姿态切换为左探头姿态。若步骤807B的判断结果为否,则执行步骤804:控制虚拟对象维持当前姿态。
图8B中控制虚拟对象执行左探头,视觉表现上可以参考图9B。
示例的,参考图9B,电子设备受到了绕第一旋转基准轴(YAW轴)逆时针转动的第一旋转操作,在当前镜头方向下,逆时针旋转对应于虚拟对象110的左侧,旋转角度Y2的绝对值大于角度阈值Y0的绝对值,则虚拟对象110的姿态向虚拟对象110的左向进行倾斜,进行姿态倾斜后虚拟对象110的头部的重心与躯干的重心不再处于同一垂直线,参考图9B,倾斜姿态可以为左探头。
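作为示例,图8A与图8B所示的逐帧判断流程可以用下面的Python示意代码概括;其中avatar对象提供的接口名称仅为本示例的假设:

```python
def per_frame_lean_check(yaw_angle: float, angle_threshold: float, avatar) -> None:
    """每帧根据绕第一旋转基准轴的旋转角度决定是否切换左/右探头姿态。

    avatar需提供is_leaning(direction)/can_lean(direction)/lean(direction)接口,
    接口名称仅为示意;顺时针为正角对应右探头,逆时针为负角对应左探头。
    """
    if abs(yaw_angle) <= angle_threshold:
        return  # 旋转角度未超过角度阈值,维持当前姿态(步骤804)
    direction = "right" if yaw_angle > 0 else "left"
    if avatar.is_leaning(direction):
        return  # 已处于对应方向的探头姿态,维持该姿态(步骤806A/806B)
    if avatar.can_lean(direction):
        avatar.lean(direction)  # 切换为左/右探头姿态(步骤808A/808B)
    # 否则维持当前姿态(步骤804)
```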
在一些实施例中,第一旋转操作对应于不同的控制模式,在第一旋转操作的角速度或者角度的取值处于姿态倾斜模式关联的取值空间时,进行虚拟对象姿态倾斜的控制。姿态倾斜模式是通过第一旋转操作控制虚拟对象进行倾斜的模式。在第一旋转操作的角速度或者角度的取值处于镜头转动模式关联的取值空间时,进行镜头转动的控制。镜头转动模式是通过第一旋转操作控制虚拟场景的镜头绕第一旋转基准轴进行镜头转动的模式。姿态倾斜模式与镜头转动模式也可以通过开关设置开启或者关闭,在姿态倾斜模式被屏蔽时镜头转动模式被开启,在镜头转动模式被屏蔽时姿态倾斜模式被开启,或者两种模式能同时被屏蔽。
图8C是本申请实施例提供的虚拟场景中的对象控制方法的一个可选的流程示意图。
参考图8C,图8C中包括步骤801:每帧检测电子设备绕各个旋转基准轴进行旋转的旋转角度。步骤802C:在电子设备绕第一旋转基准轴向虚拟角色的左向进行旋转时,判断旋转角度的取值空间是否处于姿态倾斜模式的取值空间。若步骤802C的判断结果为是,则执行步骤805C:执行姿态倾斜模式下的处理;姿态倾斜模式下的处理可以通过图8A或者8B中所示的流程表示。
若步骤802C的判断结果为否,则执行步骤806C:判断旋转方向是否为顺时针方向;若步骤806C的判断结果为否,则执行步骤808C:控制虚拟场景的镜头绕第一旋转基准轴向逆时针方向转动;若步骤806C的判断结果为是,则执行步骤807C,控制虚拟场景的镜头绕第一旋转基准轴向顺时针方向转动。
示例的,对镜头转动模式进行解释说明,参考图10A,图10A是本申请实施例提供的人机交互界面中显示虚拟场景的示意图,图10A中的镜头转动方式对应于图8C中步骤807C,图10A中以虚拟建筑124为参照物为例进行说明,虚拟建筑124为一栋一层平房,下文虚拟建筑124为同一虚拟建筑。在镜头转动模式下,电子设备受到了绕第一旋转基准轴(YAW轴)顺时针转动的第一旋转操作,转动角度为Y7,虚拟对象110的姿态维持原姿态,人机交互界面中的虚拟场景跟随第一旋转操作绕第一旋转基准轴进行顺时针的旋转,且旋转角度与第一旋转操作对应的旋转角度Y7正相关。那么人机交互界面中的画面显示为:虚拟建筑124与虚拟对象110一起向人机交互界面的右侧倾斜。虚拟建筑124、虚拟对象110与虚拟场景中地面或天空之间的位置关系保持不变,仅显示为虚拟场景对应的画面倾斜。
参考图10B,图10B是本申请实施例提供的人机交互界面中显示虚拟场景的示意图,图10B中的镜头转动方式对应于图8C中步骤808C,在镜头转动模式下,电子设备受到了绕第一旋转基准轴(YAW轴) 逆时针转动的第一旋转操作,旋转角度为Y8,虚拟对象的姿态(图10B中虚拟对象为站立姿态)保持不受镜头转动的影响(镜头转动时,虚拟对象的头部的重心与躯干的重心处于同一垂直线上),人机交互界面中的虚拟场景跟随第一旋转操作绕第一旋转基准轴进行逆时针的旋转,且旋转角度与第一旋转操作对应的旋转角度Y8正相关。那么人机交互界面中的画面显示为:虚拟建筑124与虚拟对象110一起向人机交互界面的左侧倾斜。虚拟建筑124、虚拟对象110与虚拟场景中地面或天空之间的位置关系保持不变,仅显示为虚拟场景对应的画面倾斜。
示例的,本申请实施例中以虚拟场景的镜头在虚拟对象的正背后的第三人称视角为例进行说明,但实际运用中,第三人称视角下,虚拟场景的镜头可以位于不同的方向。在虚拟场景的镜头位于虚拟对象的其他方向的情况下,第一旋转基准轴穿过人机交互界面的位置可以为人机交互界面的中心,进行第一旋转操作时,虚拟场景的镜头绕穿过人机交互界面中心位置的第一旋转基准轴进行转动,转动方向与第一旋转操作相同,转动角度与第一旋转操作对应的角度正相关。
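作为示例,镜头转动模式下虚拟场景画面绕穿过人机交互界面中心位置的第一旋转基准轴倾斜,可以用二维旋转对画面坐标做变换来示意;下面的Python代码仅为说明,旋转正方向的约定为本示例的假设:

```python
import math

def rotate_view_point(x: float, y: float, center_x: float, center_y: float,
                      angle_deg: float) -> tuple:
    """将画面上的点(x, y)绕界面中心按angle_deg旋转(顺时针为正,仅为本示例约定)。

    镜头转动模式下,虚拟场景画面整体按该变换倾斜,而虚拟对象的姿态保持不变。
    """
    theta = math.radians(-angle_deg)  # 屏幕坐标系下顺时针旋转取负角
    dx, dy = x - center_x, y - center_y
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return center_x + rx, center_y + ry
```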
下面继续说明本申请实施例提供的虚拟场景中的对象控制装置455的实施为软件模块的示例性结构,在一些实施例中,如图2所示,存储在存储器450的虚拟场景中的对象控制装置455中的软件模块可以包括:显示模块4551,配置为在人机交互界面中显示虚拟场景;其中,虚拟场景包括虚拟对象;倾斜控制模块4552,配置为响应于第一旋转操作,控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜;其中,第一旋转操作对应的第一基准轴垂直于人机交互界面。
在一些实施例中,倾斜控制模块4552,还配置为:根据与第一旋转操作绕第一旋转基准轴旋转一致的方向,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向或者右向进行倾斜;其中,虚拟对象的头部向下的各部分的倾斜角度依次减小,且均与第一旋转操作绕第一旋转基准轴旋转的角度正相关。
在一些实施例中,倾斜控制模块4552,还配置为:当第一旋转操作绕第一旋转基准轴向虚拟对象的左向进行旋转的角度大于角度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向进行倾斜;当第一旋转操作绕第一旋转基准轴向虚拟对象的右向进行旋转的角度大于角度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的右向进行倾斜。
在一些实施例中,倾斜控制模块4552,还配置为:当第一旋转操作绕第一旋转基准轴向虚拟对象的左向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的左向进行倾斜;当第一旋转操作绕第一旋转基准轴向虚拟对象的右向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制虚拟对象中包括头部在内的至少部分向虚拟对象的右向进行倾斜。
在一些实施例中,倾斜控制模块4552,还配置为:获取针对虚拟对象的历史操作数据;基于历史操作数据调用阈值识别模型,得到能够用于识别针对虚拟对象的异常操作的角度阈值和角速度阈值。
其中,阈值识别模型是通过旋转操作数据样本、以及旋转操作数据样本标记的响应或不响应的标签进行训练得到。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象自身的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:响应于虚拟对象的当前姿态满足第一条件,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理;其中,第一条件包括:虚拟对象基于当前姿态进行倾斜所需活动的身体部分未处于工作状态。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象自身的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:当虚拟对象周围的区域满足第二条件,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理。其中,第二条件包括:在区域内不存在能够对虚拟对象造成状态衰减的因素。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象自身的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:当区域不满足第二条件时,显示提示信息;其中,提示信息用于表征虚拟对象倾斜姿态时将存在风险;响应于再次接收的第一旋转操作,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象自身的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:当虚拟对象周围的区域满足第三条件时,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理;其中,第三条件包括:在区域内与第一旋转操作绕第一旋转基准轴旋转一致的方向上,不存在阻挡虚拟对象左向或右向倾斜的障碍物。
在一些实施例中,倾斜控制模块4552,还配置为:根据与第二旋转操作绕第二旋转基准轴旋转一致的方向,控制虚拟场景的镜头进行转动,其中,虚拟场景的镜头的转动角度与第二旋转操作绕第二旋转基准轴旋转的角度正相关。
在一些实施例中,倾斜控制模块4552,还配置为:根据与第三旋转操作绕第三旋转基准轴旋转一致的方向,控制虚拟场景的镜头进行转动,其中,虚拟场景的镜头的转动角度与第三旋转操作绕第三旋转基准轴旋转的角度正相关。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:当第一旋转操作的角速度的取值处于与姿态倾斜模式关联的取值空间时,确定处于姿态倾斜模式,并转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理;其中,姿态倾斜模式是通过第一旋转操作控制虚拟对象进行倾斜的模式。
在一些实施例中,倾斜控制模块4552,还配置为:当第一旋转操作的角速度的取值处于与镜头转动模式关联的取值空间时,确定处于镜头转动模式,控制虚拟场景的镜头绕第一旋转基准轴旋转;其中,虚拟场景的镜头的转动角度与第一旋转操作绕第一旋转基准轴旋转的角度正相关。
在一些实施例中,在控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜之前,倾斜控制模块4552,还配置为:检测姿态倾斜模式的状态;其中,姿态倾斜模式的状态是在响应于第一旋转操作而显示的开关上进行设置的,或者是在接收到第一旋转操作之前被设置的。当姿态倾斜模式的状态是开启状态时,转入执行控制虚拟对象的姿态向虚拟对象的左向或右向进行倾斜的处理。
当姿态倾斜模式的状态是屏蔽状态时,倾斜控制模块4552,还配置为:确定处于镜头转动模式,控制虚拟场景的镜头绕第一旋转基准轴进行转动;其中,虚拟场景的镜头的转动角度与第一旋转操作绕第一旋转基准轴旋转的角度正相关。
在一些实施例中,第一旋转操作、第二旋转操作和第三旋转操作是针对终端设备实施的,终端设备用于显示人机交互界面;或者,第一旋转操作、第二旋转操作和第三旋转操作是针对穿戴式设备或手柄设备实施的,穿戴式设备或手柄设备用于向终端设备发送相应的控制信号,终端设备用于显示人机交互界面。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟场景中的对象控制方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景中的对象控制方法,例如,如图3A示出的虚拟场景中的对象控制方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
综上,本申请实施例通过绕终端设备所对应的不同旋转基准轴进行旋转操作,对人机交互界面中显示的虚拟场景内的虚拟对象进行姿态控制或者对虚拟场景的镜头进行控制;通过旋转操作替代传统的按键操作控制虚拟对象姿态或者虚拟场景的镜头,用户无需同时使用多个手指进行按压操作来实现对虚拟对象姿态控制和镜头转动控制,提升操作的便利性,提升了对虚拟场景的操控效率;另一方面,减少了在人机交互界面中设置的虚拟按键,从而减少了虚拟按键对人机交互界面的遮挡。设置姿态倾斜模式与镜头转动模式,丰富了旋转操作能够控制的类型,提升操作的自由度,提升用户的视觉体验。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (20)

  1. 一种虚拟场景中的对象控制方法,由终端设备执行,所述方法包括:
    在人机交互界面中显示虚拟场景,其中,所述虚拟场景包括虚拟对象;
    响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面;
    响应于第二旋转操作,控制所述虚拟场景的镜头绕第二旋转基准轴转动;其中,所述第二旋转基准轴平行于所述人机交互界面的宽度方向;
    响应于第三旋转操作,控制所述虚拟场景的镜头绕第三旋转基准轴转动;其中,所述第三旋转基准轴平行于所述人机交互界面的高度方向。
  2. 如权利要求1所述的方法,其中,所述控制所述虚拟对象的姿态向所述虚拟对象自身的左向或右向进行倾斜,包括:
    根据与所述第一旋转操作绕所述第一旋转基准轴旋转一致的方向,控制所述虚拟对象中包括头部在内的至少部分向所述虚拟对象的左向或者右向进行倾斜;
    其中,所述虚拟对象的头部向下的各部分的倾斜角度依次减小,且均与所述第一旋转操作绕所述第一旋转基准轴旋转的角度正相关。
  3. 如权利要求2所述的方法,其中,所述控制所述虚拟对象中包括头部在内的至少部分向所述虚拟对象的左向或者右向进行倾斜,包括:
    当所述第一旋转操作绕所述第一旋转基准轴向所述虚拟对象的左向进行旋转的角度大于角度阈值时,控制所述虚拟对象中包括头部在内的至少部分向所述虚拟对象的左向进行倾斜;
    当所述第一旋转操作绕所述第一旋转基准轴向所述虚拟对象的右向进行旋转的角度大于角度阈值时,控制所述虚拟对象中包括头部在内的至少部分向所述虚拟对象的右向进行倾斜。
  4. 如权利要求2所述的方法,其中,所述控制所述虚拟对象中包括头部在内的至少部分向虚拟对象的左向或者右向进行倾斜,包括:
    当所述第一旋转操作绕所述第一旋转基准轴向所述虚拟对象的左向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制所述虚拟对象中包括头部在内的至少部分向虚拟对象的左向进行倾斜;
    当所述第一旋转操作绕所述第一旋转基准轴向所述虚拟对象的右向进行旋转的角度大于角度阈值,且角速度大于角速度阈值时,控制所述虚拟对象中包括头部在内的至少部分向虚拟对象的右向进行倾斜。
  5. 如权利要求4所述的方法,其中,所述方法还包括:
    获取针对所述虚拟对象的历史操作数据;
    基于所述历史操作数据调用阈值识别模型,得到能够用于识别针对所述虚拟对象的异常操作的所述角度阈值和所述角速度阈值;
    其中,所述阈值识别模型是通过旋转操作数据样本、以及所述旋转操作数据样本标记的响应或不响应的标签进行训练得到。
  6. 如权利要求1所述的方法,其中,在所述控制所述虚拟对象的姿态向所述虚拟对象自身的左向或右向进行倾斜之前,所述方法还包括:
    响应于所述虚拟对象的当前姿态满足第一条件,转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理;
    其中,所述第一条件包括:所述虚拟对象基于所述当前姿态进行倾斜所需活动的身体部分未处于工作状态。
  7. 如权利要求1所述的方法,其中,在所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜之前,所述方法还包括:
    当所述虚拟对象周围的区域满足第二条件,转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理;
    其中,所述第二条件包括:在所述区域内不存在能够对所述虚拟对象造成状态衰减的因素。
  8. 如权利要求7所述的方法,其中,在所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜之前,所述方法还包括:
    当所述区域不满足所述第二条件时,显示提示信息;其中,所述提示信息用于表征所述虚拟对象倾斜姿态时将存在风险;
    响应于再次接收的所述第一旋转操作,转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理。
  9. 如权利要求1所述的方法,其中,在控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜之前,所述方法还包括:
    当所述虚拟对象周围的区域满足第三条件时,转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理;
    其中,所述第三条件包括:在所述区域内与所述第一旋转操作绕所述第一旋转基准轴旋转一致的方向上,不存在阻挡所述虚拟对象左向或右向倾斜的障碍物。
  10. 如权利要求1所述的方法,其中,所述控制所述虚拟场景的镜头绕第二旋转基准轴转动,包括:
    根据与所述第二旋转操作绕所述第二旋转基准轴旋转一致的方向,控制所述虚拟场景的镜头进行转动,其中,所述虚拟场景的镜头的转动角度与所述第二旋转操作绕所述第二旋转基准轴旋转的角度正相关。
  11. 如权利要求1所述的方法,其中,所述控制所述虚拟场景的镜头绕所述第三旋转基准轴转动,包括:
    根据与所述第三旋转操作绕所述第三旋转基准轴旋转的一致方向,控制所述虚拟场景的镜头进行转动,其中,所述虚拟场景的镜头的转动角度与所述第三旋转操作绕所述第三旋转基准轴旋转的角度正相关。
  12. 如权利要求1所述的方法,其中,在控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜之前,所述方法还包括:
    当所述第一旋转操作的角速度的取值处于与姿态倾斜模式关联的取值空间时,确定处于所述姿态倾斜模式,并转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理;其中,所述姿态倾斜模式是通过所述第一旋转操作控制所述虚拟对象进行倾斜的模式。
  13. 如权利要求12所述的方法,其中,所述方法还包括:
    当所述第一旋转操作的角速度的取值处于与镜头转动模式关联的取值空间时,确定处于镜头转动模式,控制所述虚拟场景的镜头绕所述第一旋转基准轴旋转;其中,所述虚拟场景的镜头的转动角度与所述第一旋转操作绕所述第一旋转基准轴旋转的角度正相关。
  14. 如权利要求1所述的方法,其中,在控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜之前,所述方法还包括:
    检测姿态倾斜模式的状态;其中,所述姿态倾斜模式的状态是在响应于所述第一旋转操作而显示的开关上进行设置的,或者是在接收到所述第一旋转操作之前被设置的;
    当所述姿态倾斜模式的状态是开启状态时,转入执行所述控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜的处理;
    当所述姿态倾斜模式的状态是屏蔽状态时,所述方法还包括:
    确定处于镜头转动模式,控制所述虚拟场景的镜头绕所述第一旋转基准轴进行转动;其中,所述虚拟场景的镜头的转动角度与所述第一旋转操作绕所述第一旋转基准轴旋转的角度正相关。
  15. 如权利要求1所述的方法,其中,
    所述第一旋转操作、所述第二旋转操作和所述第三旋转操作是针对终端设备实施的,所述终端设备用于显示所述人机交互界面;或者,
    所述第一旋转操作、所述第二旋转操作和所述第三旋转操作是针对穿戴式设备或手柄设备实施的,所述穿戴式设备或所述手柄设备用于向终端设备发送相应的控制信号,所述终端设备用于显示所述人机交互界面。
  16. 一种虚拟场景中的对象控制方法,其中,所述方法包括:
    在人机交互界面中显示虚拟场景;其中,所述虚拟场景包括虚拟对象;
    响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面。
  17. 一种虚拟场景中的对象控制装置,所述装置包括:
    显示模块,配置为在人机交互界面中显示虚拟场景;其中,所述虚拟场景包括虚拟对象;
    第一控制模块,配置为响应于第一旋转操作,控制所述虚拟对象的姿态向所述虚拟对象的左向或右向进行倾斜;其中,所述第一旋转操作对应的第一基准轴垂直于所述人机交互界面;
    第二控制模块,配置为响应于第二旋转操作,控制所述虚拟场景的镜头绕第二旋转基准轴转动;其中,所述第二旋转基准轴平行于所述人机交互界面的宽度方向;
    第三控制模块,配置为响应于第三旋转操作,控制所述虚拟场景的镜头绕第三旋转基准轴转动;其中,所述第三旋转基准轴平行于所述人机交互界面的高度方向。
  18. 一种终端设备,所述终端设备包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至14任一项或权利要求15所述的虚拟场景中的对象控制方法。
  19. 一种计算机可读存储介质,存储有可执行指令,用于被处理器执行时实现权利要求1至14任一项或权利要求15所述的虚拟场景中的对象控制方法。
  20. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时实现权利要求1至14任一项或权利要求15所述的虚拟场景中的对象控制方法。

