WO2022142626A1 - Adaptive display method and apparatus for a virtual scene, electronic device, storage medium, and computer program product - Google Patents

Adaptive display method and apparatus for a virtual scene, electronic device, storage medium, and computer program product

Info

Publication number
WO2022142626A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual scene
button
touch
buttons
area
Prior art date
Application number
PCT/CN2021/125374
Other languages
English (en)
French (fr)
Inventor
邹聃成
吴胜宇
田聪
仇蒙
何晶晶
刘博艺
崔维健
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to KR1020227031607A, published as KR20220130257A
Priority to JP2022556518A, published as JP7447299B2
Priority to US17/856,449, published as US11995311B2
Publication of WO2022142626A1

Classifications

    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/0416: Control or interface arrangements specially adapted for digitisers
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/04886: Interaction techniques by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F 2203/04804: Transparency, e.g. transparent or translucent windows
    • A63F 13/2145: Input arrangements for video game devices for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/426: Processing input control signals by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/533: Controlling the output signals based on the game progress, involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F 13/822: Strategy games; Role-playing games

Definitions

  • The present application relates to human-computer interaction technology for computers, and in particular to an adaptive display method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene.
  • Display technology based on graphics processing hardware expands the channels for perceiving the environment and obtaining information. In particular, display technology for virtual scenes can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and it has various typical application scenarios: for example, in military exercise simulations and in the virtual scenes of games, it can simulate real battle processes between virtual objects.
  • Buttons are widely used in virtual scenes, for example buttons with an attack function and joystick buttons for controlling the movement of virtual objects; buttons in virtual scenes realize their corresponding functions through clicking, pressing, sliding, and other operations.
  • Embodiments of the present application provide an adaptive display method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene, which can automatically adjust the size of buttons in the virtual scene and improve the efficiency of human-computer interaction in the virtual scene.
  • An embodiment of the present application provides an adaptive display method for a virtual scene, executed by an electronic device, the method including:
  • displaying a virtual scene and a plurality of buttons of different sizes; acquiring, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation; and updating and displaying the virtual scene, wherein the size of the buttons included in the updated virtual scene is adapted to the touch area corresponding to the touch operation.
  • An embodiment of the present application provides an adaptive display apparatus for a virtual scene, the apparatus including:
  • a display module, configured to display a virtual scene and a plurality of buttons of different sizes;
  • a processing module, configured to acquire, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation;
  • an updating module, configured to update and display the virtual scene, wherein the size of the buttons included in the updated virtual scene is adapted to the touch area corresponding to the touch operation.
  • An embodiment of the present application provides an electronic device for adaptive display, the electronic device including:
  • a memory, configured to store executable instructions; and a processor, configured to implement the adaptive display method for a virtual scene provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions for causing a processor to execute the adaptive display method for a virtual scene provided by the embodiments of the present application.
  • Embodiments of the present application provide a computer program product, including a computer program or instructions, which, when executed by a processor, implements the adaptive display method for a virtual scene provided by the embodiments of the present application.
  • Through a plurality of buttons of different sizes in the virtual scene, the touch area corresponding to a touch operation is detected, and the size of the buttons included in the virtual scene is adjusted to match that touch area. Button sizes are thus adjusted through an efficient human-computer interaction operation, which improves the efficiency of human-computer interaction in the virtual scene and, at the same time, significantly saves the resource consumption of graphics processing hardware for computing related to human-computer interaction.
  • FIGS. 1A-1D are schematic interface diagrams of button size adjustment provided by the related art;
  • FIG. 2A is a schematic diagram of an application mode of the adaptive display method for a virtual scene provided by an embodiment of the present application;
  • FIG. 2B is a schematic diagram of an application mode of the adaptive display method for a virtual scene provided by an embodiment of the present application;
  • FIG. 3A is a schematic structural diagram of an electronic device for adaptive display provided by an embodiment of the present application;
  • FIG. 3B is a schematic diagram of the principle of a human-computer interaction engine installed in an adaptive display apparatus for a virtual scene provided by an embodiment of the present application;
  • FIGS. 4A-4C are schematic flowcharts of the adaptive display method for a virtual scene provided by an embodiment of the present application;
  • FIG. 5 is a schematic interface diagram of button size detection provided by an embodiment of the present application;
  • FIG. 6 is a schematic interface diagram of a button adaptation detection entry provided by an embodiment of the present application;
  • FIG. 7 is a schematic interface diagram of button size detection provided by an embodiment of the present application;
  • FIG. 8 is a schematic interface diagram of maximum button size detection provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a confirmation interface provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a discrete scaling value selection interface provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a continuous scaling value selection interface provided by an embodiment of the present application;
  • FIG. 12 is a schematic flowchart of adaptively adjusting a game UI provided by an embodiment of the present application;
  • FIGS. 13A-13E are schematic interface diagrams of button size adjustment provided by an embodiment of the present application.
  • The terms “first” and “second” below are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understood that, where permitted, the specific order or sequence of “first” and “second” may be interchanged, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
  • Virtual scene: a scene output by a device that is different from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example through two-dimensional images output by a display screen, or through three-dimensional images output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various perceptions resembling the real world, such as auditory, tactile, olfactory, and motion perception, can also be formed through various possible hardware.
  • In response to: used to represent the condition or state on which an executed operation depends; when the dependent condition or state is satisfied, one or more of the executed operations may be real-time or may have a set delay. Unless otherwise specified, there is no restriction on the order in which multiple operations are executed.
  • Client: an application running in a terminal to provide various services, such as a game client or a military exercise simulation client.
  • Virtual objects: the images of various people and objects that can interact in the virtual scene, or movable objects in the virtual scene.
  • The movable objects may be virtual characters, virtual animals, cartoon characters, etc., for example the characters, animals, plants, oil barrels, walls, and stones displayed in the virtual scene.
  • The virtual object may be a virtual avatar representing the user in the virtual scene.
  • The virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • The virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) set up in the virtual scene battle through training, or a non-player character (NPC) set up in the virtual scene interaction.
  • The virtual object may be a virtual character performing adversarial interactions in the virtual scene.
  • The number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
  • Users can control virtual objects to fall freely, glide, or open a parachute to fall in the sky of the virtual scene; to run, jump, crawl, or bend forward on land; and to swim, float, or dive in the ocean.
  • Users can also control virtual objects to move in the virtual scene on a virtual vehicle, which may be a virtual car, a virtual aircraft, a virtual yacht, etc.; the above scenarios are used only as examples for illustration and are not specifically limited in the embodiments of the present application.
  • Users can also control virtual objects to interact adversarially with other virtual objects through virtual props, which may be throwing props such as grenades, cluster mines, and sticky grenades, or shooting props such as machine guns, pistols, and rifles; this application does not specifically limit the types of virtual props.
  • Scene data: represents the various characteristics of objects in the virtual scene during the interaction process. For example, it may include the positions of the objects in the virtual scene, the waiting time for the various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific time), and attribute values of the various states of a game character, such as health points (also called the red amount) and mana points (also called the blue amount).
  • In the related art, Asian players tend to have slender fingers, so the button sizes across the entire user interface (UI) are designed small, while European and American players are on average taller and have larger fingers; if the same small buttons are still used for them, a series of problems arises, such as a poor operating experience, clicks not landing as expected, and frequent accidental touches.
  • In the related art, button size adjustment proceeds as follows when the player is in the battle interface: 1) as shown in FIG. 1A, click the “setting” button 102 on the left side of the minimap 101 to enter the setting page shown in FIG. 1B; 2) as shown in FIG. 1B, click the operation setting tab 103 in the list to enter the interface adjustment interface shown in FIG. 1C; 3) as shown in FIG. 1C, click the “custom layout” button 104 to enter the custom adjustment interface; 4) click a button that needs to be adjusted to enter the custom adjustment interface shown in FIG. 1D; 5) as shown in FIG. 1D, use the slider 105 in the interface to adjust the size; 6) after completing the adjustment, click the save button to complete the button size setting.
  • In the related art, the operation path for button size adjustment is thus buried deep; for a novice or casual player it carries a high understanding cost, and it is difficult to learn and find the most suitable button size by oneself. Even when the player understands the adjustment method, a complex adjustment must still be made for each individual button, so adjusting button sizes takes a long time and results in a poor user experience.
  • In view of this, the embodiments of the present application provide an adaptive display method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene, which can automatically adjust the size of buttons in the virtual scene and improve the efficiency of human-computer interaction in the virtual scene.
  • The electronic device provided by the embodiments of the present application may be implemented as various types of user terminals, such as a notebook computer, tablet computer, desktop computer, set-top box, or mobile device (for example, a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), and may also be implemented as a server. Exemplary applications in which the device is implemented as a terminal are described below.
  • The virtual scene may be output entirely based on the terminal, or output based on the cooperation of the terminal and the server.
  • In some application scenarios, the virtual scene may be a picture presented in a military exercise simulation, in which the user can simulate battle situations, strategies, or tactics through virtual objects belonging to different teams; this has a great guiding role for the command of military operations.
  • In other application scenarios, the virtual scene may be an environment for game characters to interact in, for example for game characters to play against each other in the virtual scene; the two sides can interact in the virtual scene, allowing the user to relieve the stress of life during the game.
  • FIG. 2A is a schematic diagram of an application mode of the adaptive display method for a virtual scene provided by an embodiment of the present application, suitable for application modes in which the calculation of the data related to the virtual scene 100 can be completed entirely by relying on the computing power of the terminal 400, where the output of the virtual scene is completed through a terminal 400 such as a smartphone, tablet computer, or virtual reality/augmented reality device.
  • The terminal 400 calculates the data required for display through its graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, on its graphics output hardware, video frames capable of forming visual perception of the virtual scene; for example, two-dimensional video frames can be presented on the display screen of a smartphone, or three-dimensional video frames can be projected onto the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perception effect, the device can also use different hardware to form one or more of auditory perception, tactile perception, motion perception, and taste perception.
  • As an example, the terminal 400 runs a client 410 (e.g., a stand-alone game application) and outputs a virtual scene including role-playing while the client 410 is running. The virtual scene is an environment for game characters to interact in, for example plains, streets, or valleys; the virtual scene includes a first virtual object 110 and a virtual prop 120.
  • The first virtual object 110 may be a game character controlled by a user (or player); that is, the first virtual object 110 is controlled by a real user and operates in the virtual scene in response to the real user's manipulation of buttons (including a joystick button, attack button, defense button, etc.). For example, when the real user moves the joystick button to the left, the first virtual object moves to the left in the virtual scene; it can also remain stationary, jump, and use various functions (such as skills and props).
  • The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 can pick up the virtual prop 120 in the virtual scene by moving with the joystick button, so as to use the functions of the virtual prop 120 in game battles.
  • The user performs touch operations on a plurality of buttons of different sizes displayed on the client 410, the touch area corresponding to the touch operation is determined, and the size of the buttons included in the virtual scene (such as the joystick button 130) is adjusted based on the touch area corresponding to the touch operation, so that the size of the adjusted buttons is adapted to that touch area; subsequent human-computer interaction is then performed based on the adjusted buttons. For example, the user controls, through the adjusted joystick button 130, the first virtual object 110 to move toward the virtual prop 120 in the virtual scene and pick it up. In this way, the user does not need to manually adjust the size of each button in the virtual scene one by one; button sizes are adjusted through an efficient human-computer interaction operation, improving the efficiency of human-computer interaction in the virtual scene.
  • FIG. 2B is a schematic diagram of an application mode of the adaptive display method for a virtual scene provided by an embodiment of the present application, applied to a terminal 400 and a server 200, and suitable for application modes that rely on the computing capability of the server 200 to complete the calculation of the virtual scene and output the virtual scene on the terminal 400.
  • Taking the formation of visual perception of the virtual scene 100 as an example, the server 200 calculates the display data related to the virtual scene and sends it to the terminal 400; the terminal 400 relies on its graphics computing hardware to complete the loading, parsing, and rendering of the calculated display data, and relies on its graphics output hardware to output the virtual scene to form visual perception. For example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames achieving a three-dimensional display effect can be projected onto the lenses of augmented reality/virtual reality glasses; for perception of the virtual scene in other forms, the corresponding hardware output of the terminal can be used, for example a speaker output to form auditory perception, a vibrator output to form tactile perception, and so on.
  • As an example, the terminal 400 runs a client 410 (e.g., an online-version game application) and interacts with other users' game play by connecting to a game server (i.e., the server 200); the terminal 400 outputs the virtual scene 100 of the client 410, which includes a first virtual object 110 and a virtual prop 120. The first virtual object 110 may be a game character controlled by the user; that is, the first virtual object 110 is controlled by a real user and operates in the virtual scene in response to the real user's manipulation of buttons (such as a joystick button, attack button, defense button, etc.). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; it can also remain stationary, jump, and use various functions (such as skills and props).
  • The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 can pick up the virtual prop 120 in the virtual scene by moving with the joystick button, so as to use the functions of the virtual prop 120 in game battles.
  • The client 410 sends the user's touch operation to the server 200 through a network 300; the server 200 determines, according to the touch operation, the touch area corresponding to the touch operation, and adjusts, based on that touch area, the size of the buttons included in the virtual scene (e.g., the joystick button 130), so that the size of the adjusted buttons is adapted to the touch area corresponding to the touch operation; the adjusted buttons are then sent to the client 410.
  • The client 410 presents the adjusted buttons (for example, the joystick button 130), so that subsequent human-computer interaction is performed based on them; for example, the user controls, through the adjusted joystick button 130, the first virtual object 110 to move toward the virtual prop 120 in the virtual scene and pick it up. In this way, button sizes are adjusted through an efficient human-computer interaction operation, improving the efficiency of human-computer interaction in the virtual scene.
  • The terminal 400 may implement the adaptive display method for a virtual scene provided by the embodiments of the present application by running a computer program.
  • For example, the computer program may be a native program or software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as a game APP (the above-mentioned client 410); a mini program, i.e., a program that only needs to be downloaded into a browser environment to run; or a game mini program that can be embedded into any APP. In general, the above computer program may be any form of application, module, or plug-in.
  • Cloud technology refers to a hosting technology that unifies a series of resources, such as hardware, software, and networks, within a wide area network or a local area network to realize the computing, storage, processing, and sharing of data.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
  • For example, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application.
  • FIG. 3A is a schematic structural diagram of an electronic device for adaptive display provided by an embodiment of the present application; the electronic device being a terminal is taken as an example for illustration.
  • The electronic device shown in FIG. 3A includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430.
  • The various components in the electronic device 400 are coupled together by a bus system 440. It can be understood that the bus system 440 is used to implement connection and communication between these components. In addition to a data bus, the bus system 440 also includes a power bus, a control bus, and a status signal bus; however, for the sake of clear illustration, the various buses are all labeled as the bus system 440 in FIG. 3A.
  • The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor, or the like.
  • User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers and/or one or more visual display screens.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 450 optionally includes one or more storage devices that are physically remote from processor 410.
  • Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM).
  • The memory 450 described in the embodiments of the present application is intended to include any suitable type of memory.
  • The memory 450 is capable of storing data to support various operations; examples of the data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • An operating system 451 includes system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks;
  • a presentation module 453 is used to enable the presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers) associated with the user interface 430;
  • an input processing module 454 is used to detect one or more user inputs or interactions from the one or more input devices 432 and to translate the detected inputs or interactions.
  • In some embodiments, the adaptive display apparatus for a virtual scene provided by the embodiments of the present application may be implemented in software. FIG. 3A shows the adaptive display apparatus 455 for a virtual scene stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: a display module 4551, a processing module 4552, an updating module 4553, and a setting module 4554. These modules are logical, so they may be combined arbitrarily or split further according to the functions to be realized; the function of each module is explained below.
  • FIG. 3B is a schematic diagram of the principle of a human-computer interaction engine installed in an adaptive display apparatus for a virtual scene provided by an embodiment of the present application; taking its application to a game as an example, it can also be called a game engine.
  • A game engine refers to the core components of some written, editable computer game systems or interactive real-time graphics applications; these systems provide game designers with the various tools needed to write games, with the purpose of allowing game designers to create game programs easily and quickly without starting from scratch.
  • The game engine includes: a rendering engine (i.e., a "renderer", including a 2D graphics engine and a 3D graphics engine), a physics engine, an obstacle detection system, sound effects, a scripting engine, computer animation, artificial intelligence, a network engine, and scene management.
  • A game engine is a set of codes (instructions) recognizable by a machine and designed for a machine running a certain type of game; like an engine, it controls the running of the game, and a game program can be divided into two parts: the game engine and the game resources.
  • The adaptive display method for a virtual scene provided by the embodiments of the present application is implemented by the modules in the adaptive display apparatus for a virtual scene shown in FIG. 3A calling the relevant components of the human-computer interaction engine shown in FIG. 3B, as exemplified below.
  • The display module 4551 is used to display a virtual scene and a plurality of buttons of different sizes.
  • For example, the display module 4551 invokes the user interface part of the game engine shown in FIG. 3B to realize interaction between the user and the game; it makes a two-dimensional or three-dimensional model by calling the model part of the game engine; after the model is made, it assigns material maps to the different faces of the model through the skeletal animation part, which is equivalent to covering the bones with skin; and finally, through the rendering part, all the effects of the model, animation, light and shadow, and special effects are calculated in real time and displayed on the human-computer interaction interface.
  • The processing module 4552 is configured to acquire, in response to touch operations on buttons of different sizes, the touch area corresponding to the touch operation, and to call the updating module 4553 to update and display the virtual scene; the updating module 4553 invokes the rendering module in the game engine shown in FIG. 3B so that the adjusted buttons are rendered and displayed on the human-computer interaction interface, and the size of the buttons included in the updated virtual scene on the human-computer interaction interface is adapted to the touch area corresponding to the touch operation.
  • The setting module 4554 is used to display a button adaptation detection entry in the virtual scene and to respond to a triggering operation on the button adaptation detection entry; the setting module 4554 invokes the rendering module in the game engine shown in FIG. 3B so that the button adaptation detection entry is rendered and displayed on the human-computer interaction interface.
  • FIG. 4A is a schematic flowchart of the adaptive display method for a virtual scene provided by an embodiment of the present application, which is described below in conjunction with the steps shown in FIG. 4A.
  • The plurality of buttons of different sizes may be function buttons associated with the interactive functions in the virtual scene (such as selecting a character or controlling a character), for example a joystick button for controlling the movement of a virtual object, or an attack button for controlling a virtual object to attack other virtual objects.
  • In this way, when the user operates the function buttons, the electronic device responds to the touch operation on the function buttons and learns the touch area that the touch subject can achieve (that is, the touch area corresponding to the touch operation); then, based on that touch area, the virtual scene is updated automatically, or after the user confirms that it needs to be updated, so that the size of the buttons included in the updated virtual scene matches the touch area that the user can achieve.
  • The plurality of buttons of different sizes may also be buttons dedicated to detecting the touch area and unrelated to the interactive functions in the virtual scene (for example, selecting a character or controlling a character), such as dedicated detection buttons. In this way, when the user operates multiple detection buttons of different sizes, the electronic device responds to the touch operation on the detection buttons to acquire the touch area corresponding to the touch operation; then, based on that touch area, the virtual scene is updated automatically, or after the user confirms that it needs to be updated, so that the size of the function buttons included in the updated virtual scene is adapted to the touch area corresponding to the touch operation.
  • The touch subject refers to an object that can perform a touch.
  • In some embodiments, the touch subject is a real user: the user touches a button displayed on the electronic device with a finger, so that the sensor of the electronic device detects the touch area of the finger when it touches the button.
  • In other embodiments, the touch subject is a glove with a touch function: the user touches the button displayed on the electronic device through the glove, so that the sensor of the electronic device detects the touch area of the glove when it touches the button.
  • The touch area that the touch subject can achieve (that is, the touch area corresponding to the touch operation) is detected by the sensor corresponding to the display screen of the electronic device, where the sensor includes multiple sensing units.
  • When the touch subject touches multiple buttons of different sizes, it triggers multiple sensing units in the sensor, and the electronic device converts the number of triggered sensing units into the touch area corresponding to the touch operation; thus, in response to the touch operation on a button, the touch area corresponding to the touch operation is calculated, as in the sketch below.
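  • As a minimal illustration of that conversion, the following sketch assumes a uniform grid of sensing units with a known per-unit cell size; all names and the grid model are assumptions for illustration, not taken from the patent:

```typescript
// Convert the number of triggered sensing units into a touch area,
// assuming a uniform sensor grid where each unit covers a fixed cell.
interface TouchSensor {
  unitWidthPx: number;  // width of one sensing unit, in pixels (assumed)
  unitHeightPx: number; // height of one sensing unit, in pixels (assumed)
}

function touchAreaFromUnits(sensor: TouchSensor, triggeredUnits: number): number {
  // Each triggered unit contributes one cell; the sum approximates the
  // contact area of the finger (or glove) in square pixels.
  return triggeredUnits * sensor.unitWidthPx * sensor.unitHeightPx;
}

// Example: 42 triggered units on a 2px-by-2px grid -> 168 px^2 of contact.
const contactArea = touchAreaFromUnits({ unitWidthPx: 2, unitHeightPx: 2 }, 42);
```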
  • In step 101, a virtual scene and a plurality of buttons of different sizes are displayed.
  • In some embodiments, the plurality of buttons of different sizes are associated with the interactive functions in the virtual scene (e.g., selecting a character or controlling a character), and they may be buttons with the same function or with different functions.
  • In this way, the touch area of the touch subject can be obtained through the function buttons themselves, and there is no need to generate additional buttons unrelated to the interactive functions in the virtual scene to detect the touch area that the touch subject can achieve, which saves the resource consumption of computing related to human-computer interaction.
  • In other embodiments, the plurality of buttons of different sizes are unrelated to the interactive functions in the virtual scene (such as selecting a character or controlling a character), for example prototype buttons dedicated to detecting the touch area.
  • The virtual scene is displayed, and an adaptation detection area independent of the virtual scene is displayed; the adaptation detection area can be displayed in a split screen or in a floating layer, and it includes multiple buttons of different sizes that are unrelated to the interactive functions in the virtual scene. For example, as shown in FIG. 5, the adaptation detection area 501 is displayed through a floating layer, distinguished from the virtual scene 500, and includes a 220px button 502, a 120px button 503, a 75px button 504, and a 45px button 505. Subsequently, in response to touch operations on the buttons of different sizes in the adaptation detection area, the interaction process in the virtual scene is paused or continued, and the touch area that the touch subject can achieve is obtained.
  • That is, during the process of detecting, through the adaptation detection area, the touch area that the touch subject can achieve, the virtual scene can be paused at any time or can continue to run.
  • In some embodiments, a button adaptation detection entry is displayed in the virtual scene; in response to a triggering operation on the button adaptation detection entry, it is determined that the operation of displaying the adaptation detection area independent of the virtual scene will be executed.
  • For example, as shown in FIG. 6, the button adaptation detection entry 601 is displayed in the virtual scene; when the user triggers the button adaptation detection entry 601, the adaptation detection area 501 independent of the virtual scene, as shown in FIG. 5, is displayed.
  • The embodiment of the present application does not limit the triggering operation; for example, the triggering operation may be a click operation, or a touch operation such as a long press.
  • In step 102, in response to touch operations on the multiple buttons of different sizes, the touch area corresponding to the touch operation is acquired.
  • In some embodiments, the types of touch area that the touch subject can achieve include a minimum touch area and a maximum touch area. After the minimum touch area and the maximum touch area that the touch subject can achieve are obtained, they are displayed in the virtual scene, so that the size of the buttons included in the virtual scene can subsequently be adjusted based on the touch areas reachable by the touch subject. In this way, the buttons included in the virtual scene do not need to be adjusted manually; cumbersome manual operations are reduced, the sizes of the buttons in the virtual scene are adjusted automatically, and the efficiency of human-computer interaction in the virtual scene is improved.
  • FIG. 4B is an optional schematic flowchart of the adaptive display method for a virtual scene provided by an embodiment of the present application.
  • FIG. 4B shows that FIG. 4A further includes step 104, and that step 102 can be implemented by step 1021, where the types of touch area that the touch subject can achieve include the minimum touch area: in step 104, a plurality of buttons of different sizes are displayed in sequence in the adaptation detection area in descending order of button size; in step 1021, in response to the touch operations performed in sequence on the plurality of buttons of different sizes, the minimum touch area corresponding to the touch operations is acquired.
  • For example, as shown in FIG. 5, a 220px button 502, a 120px button 503, a 75px button 504, and a 45px button 505 are displayed from left to right in the adaptation detection area 501; that is, button 502, button 503, button 504, and button 505 are sorted from the largest size to the smallest. The touch subject clicks button 502, button 503, button 504, and button 505 from left to right, and in the process of clicking them the minimum touch area that the touch subject can achieve is obtained.
  • In some embodiments, acquiring the minimum touch area corresponding to the touch operation includes performing the following processing for a button of any size among the plurality of buttons of different sizes: when the number of times the button of that size is touched by mistake is greater than a false-touch threshold, determining the size that satisfies a preset condition among the multiple sizes as the minimum touch area corresponding to the touch operation, where the preset condition includes being adjacent to that size and larger than that size.
  • That is, when the number of times a button of a given size is touched by mistake is greater than the false-touch threshold, the size adjacent to and larger than that size is determined as the minimum touch area that the touch subject can achieve. For example, if the button 505 is mistakenly touched twice in a row, the number of times the 45px button is mistakenly touched is greater than the false-touch threshold (for example, a threshold of 1), so the size adjacent to and larger than 45px (i.e., 75px) is taken as the minimum touch area that the touch subject can achieve.
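  • This rule can be read compactly as: probe the sizes from largest to smallest, count false touches per size, and return the size adjacent to and larger than the first size whose false-touch count exceeds the threshold. The sketch below is illustrative only; the names, data shapes, and default threshold are assumptions, not the patent's code:

```typescript
// Determine the minimum touch size: sizes are probed in descending order,
// and the first size whose false-touch count exceeds the threshold is
// rejected in favor of the adjacent larger size.
function minimumTouchSize(
  sizesDescending: number[],             // e.g. [220, 120, 75, 45] (px)
  falseTouchCounts: Map<number, number>, // false touches observed per size
  falseTouchThreshold = 1,               // assumed default, as in the example
): number {
  for (let i = 1; i < sizesDescending.length; i++) {
    const size = sizesDescending[i];
    if ((falseTouchCounts.get(size) ?? 0) > falseTouchThreshold) {
      return sizesDescending[i - 1]; // the size adjacent to and larger than it
    }
  }
  // No size was rejected: even the smallest size can be touched reliably.
  return sizesDescending[sizesDescending.length - 1];
}

// Example from the text: 45px mis-touched twice with threshold 1 -> 75px.
const minTouchSize = minimumTouchSize([220, 120, 75, 45], new Map([[45, 2]]));
```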
  • In some embodiments, there are multiple buttons of the same size; for example, as shown in FIG. 7, the 45px buttons include button 702, button 703, and button 704.
  • When the number of buttons of a given size that are touched by mistake is greater than the false-touch threshold, the size adjacent to and larger than that size is determined as the minimum touch area that the touch subject can achieve. For example, if button 702 and button 704 are falsely touched during the detection process, the number of times the 45px buttons are touched by mistake is greater than the false-touch threshold (for example, a threshold of 1), so the size adjacent to and larger than 45px (i.e., 75px) is taken as the minimum touch area that the touch subject can achieve.
  • In some embodiments, the types of touch area include the maximum touch area, and acquiring the touch area corresponding to the touch operation includes: acquiring the fingerprint applied to the button, and using the area of the fingerprint as the maximum touch area corresponding to the touch operation.
  • For example, as shown in FIG. 8, the fingerprint 802 applied by the touch subject to the button 801 is acquired, and the center of the fingerprint 802 is the center 803. The maximum distance 804 between the center 803 and the edge of the fingerprint may be taken as the fingerprint radius r1, and the area of the circle with radius r1 (that is, the area of the fingerprint) taken as the maximum touch area that the touch subject can achieve; alternatively, the minimum distance 805 between the center 803 and the edge of the fingerprint may be taken as the fingerprint radius r2, and the area of the circle with radius r2 (that is, the area of the fingerprint) taken as the maximum touch area that the touch subject can achieve.
  • In some embodiments, multiple fingerprints corresponding to the touch subject pressing the button multiple times are acquired, and the fingerprint with the largest area among them is used as the fingerprint applied to the button.
  • In other embodiments, acquiring the fingerprint applied to the button includes: acquiring multiple fingerprints corresponding to pressing the button multiple times, and performing the following processing for each of the multiple fingerprints: when the area of the fingerprint remains unchanged for longer than a time threshold during the pressing process, using that fingerprint as the fingerprint applied to the button.
  • The multiple fingerprints obtained when the touch subject presses the button several times can be unstable, and a fingerprint may be captured when the touch subject is not comfortable; the maximum touch area obtained in that situation does not conform to the touch subject's usage habits and easily causes false touches. By using, as the fingerprint applied to the button, only a fingerprint whose area remains unchanged beyond the time threshold, a stable fingerprint is obtained, that is, one captured when the touch subject is comfortable; the maximum touch area obtained in this case conforms to the touch subject's usage habits, thereby improving the user experience.
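  • The circle-area rule of FIG. 8 and the stability filter just described could be combined as in the following sketch; the sample type, field names, and the 300 ms default are assumptions for illustration:

```typescript
// Compute the maximum touch area from stable presses only: a press counts
// when its contact area stays unchanged longer than `stableMs`, and the
// area is the circle spanned by the largest center-to-edge distance (r1).
interface FingerprintSample {
  centerToEdgeMaxPx: number; // distance 804 in FIG. 8, used as radius r1
  stableDurationMs: number;  // how long the contact area stayed unchanged
}

function maxTouchArea(samples: FingerprintSample[], stableMs = 300): number | null {
  let best: number | null = null;
  for (const s of samples) {
    if (s.stableDurationMs <= stableMs) continue;    // unstable press: skip
    const area = Math.PI * s.centerToEdgeMaxPx ** 2; // circle with radius r1
    if (best === null || area > best) best = area;   // keep the largest area
  }
  return best; // null if no press was stable enough
}
```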
  • In step 103, the virtual scene is updated and displayed, wherein the size of the buttons included in the updated virtual scene is adapted to the touch area corresponding to the touch operation.
  • The size of the buttons included in the virtual scene is adjusted based on the touch area that the touch subject can achieve, so that there is no need to manually adjust the buttons included in the virtual scene; cumbersome manual operations are reduced, the sizes of the buttons in the virtual scene are adjusted automatically, and the efficiency of human-computer interaction in the virtual scene is improved.
  • In some embodiments, the types of touch area that the touch subject can achieve include the minimum touch area and the maximum touch area, and a confirmation interface can be displayed in the virtual scene; the confirmation interface can be an interface independent of the virtual scene.
  • As shown in FIG. 9, the confirmation interface includes the minimum touch area 901 and the maximum touch area 902. After the touch subject clicks the “OK and Generate” button 903, in response to the touch subject's triggering operation on the button 903, the size of the buttons included in the virtual scene is adjusted so that the area corresponding to the adjusted button size lies between the minimum touch area and the maximum touch area; in this way there is no need to manually adjust the buttons included in the virtual scene one by one.
  • In some embodiments, the frequency with which a button included in the virtual scene is accidentally touched by the touch subject is obtained, and when the frequency is greater than a frequency threshold, it is determined that the operation of adjusting the size of the button will be executed.
  • Not all buttons included in the virtual scene need to be adjusted; a button's size needs adjusting only when it is frequently touched by mistake while the virtual scene is running. That is, the frequency with which the buttons included in the virtual scene are falsely touched by the touch subject is obtained periodically; when the frequency is greater than the frequency threshold, it is determined that the button is frequently touched by mistake and that the operation of adjusting the size of that button needs to be executed, as in the sketch below.
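  • A minimal sketch of this periodic check follows; the window length, threshold value, and names are assumptions for illustration:

```typescript
// Periodic false-touch check: a button qualifies for re-adaptation when its
// false-touch count within a sampling window exceeds a threshold.
class FalseTouchMonitor {
  private falseTouches = 0;

  constructor(
    private readonly frequencyThreshold = 5, // false touches per window (assumed)
  ) {}

  recordFalseTouch(): void {
    this.falseTouches++;
  }

  // Called once per sampling window; returns true if the button's size
  // should be adjusted, then starts a new window.
  shouldAdjust(): boolean {
    const adjust = this.falseTouches > this.frequencyThreshold;
    this.falseTouches = 0;
    return adjust;
  }
}
```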
  • Referring to FIG. 4C, an optional flowchart of the method for adapting and displaying a virtual scene provided by an embodiment of the present application, step 103 in FIG. 4A can be implemented through steps 1031 to 1033, where the types of touch area include the minimum touch area: in step 1031, the first button and the second button included in the virtual scene are obtained; in step 1032, the size of the first button is increased so that the area corresponding to the enlarged first button lies between the minimum touch area and the maximum touch area; in step 1033, the size of the second button is reduced so that the area corresponding to the reduced second button lies between the minimum touch area and the maximum touch area.
  • Here, the area corresponding to the size of the first button is smaller than the minimum touch area, and the area corresponding to the size of the second button is larger than the maximum touch area. For example, when the size of a first button included in the virtual scene is smaller than the minimum touch area, it is adjusted so that the adjusted size equals the minimum touch area; when the size of a second button is larger than the maximum touch area, it is adjusted so that the adjusted size equals the maximum touch area. Buttons whose areas already lie between the two touch areas are left unchanged, so that all buttons in the virtual scene end up between the minimum and maximum touch areas, matching the touch subject's usage habits and reducing false touches. A sketch follows below.
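  • A hypothetical Python helper for steps 1031 to 1033 might look as follows; `Button` and its fields are illustrative assumptions, not names from the patent:

```python
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    area: float  # current on-screen area of the button, in px^2

def adapt_buttons(buttons, min_area, max_area):
    """Clamp every button's area into [min_area, max_area]."""
    for b in buttons:
        if b.area < min_area:
            b.area = min_area  # first-button case: enlarge to the minimum touch area
        elif b.area > max_area:
            b.area = max_area  # second-button case: shrink to the maximum touch area
    return buttons

# Example: a 45x45 px button is enlarged, a 400x400 px button is shrunk.
adapt_buttons([Button("jump", 45 * 45), Button("attack", 400 * 400)],
              min_area=75 * 75, max_area=300 * 300)
```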
  • In some embodiments, adjusting the size of the buttons included in the virtual scene includes: obtaining a zoom ratio for the buttons included in the virtual scene, wherein the zoom ratio is used to adjust the sizes of all buttons included in the virtual scene so that each adjusted area lies between the minimum touch area and the maximum touch area; the button sizes are then adjusted according to the zoom ratio.
  • That is, after the minimum touch area and the maximum touch area are obtained, the original sizes of all buttons in the virtual scene are determined, and a scaling ratio is computed such that every adjusted button area lies between the minimum touch area and the maximum touch area; the buttons are then resized by that ratio. This automatically adjusts the sizes of all buttons in the virtual scene and improves the efficiency of human-computer interaction in the virtual scene, as sketched below.
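  • One way such a single zoom ratio could be computed is sketched below; this calculation is an assumption, since the patent does not spell it out. Every button is scaled by the same linear factor s, chosen so that after scaling the smallest button is no smaller than the minimum touch area and the largest is no larger than the maximum touch area:

```python
import math

def uniform_zoom_ratio(areas, min_area, max_area):
    """Return a linear scale factor s such that s**2 * area lies in
    [min_area, max_area] for every button area, or None if no single
    ratio can satisfy all buttons at once."""
    lo = math.sqrt(min_area / min(areas))  # smallest button must reach min_area
    hi = math.sqrt(max_area / max(areas))  # largest button must stay under max_area
    if lo > hi:
        return None
    return (lo + hi) / 2  # any value in [lo, hi] works; take the midpoint

# Buttons of 75px, 120px and 220px squares against A = 75*75 and B = 300*300.
ratio = uniform_zoom_ratio([75 * 75, 120 * 120, 220 * 220],
                           min_area=75 * 75, max_area=300 * 300)  # ~1.18
```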
  • When the zoom ratio comprises multiple values, adjusting the button sizes according to the zoom ratio includes: displaying a zoom-ratio selection area that contains the multiple zoom-ratio values; in response to a selection operation on those values, adjusting the size of the buttons included in the virtual scene according to the selected value.
  • The embodiment of the present application does not limit the selection operation; it may be, for example, a click operation or a long-press operation.
  • When there are multiple discrete zoom-ratio values, for example 0.5, 0.7, and 1.2, a zoom-ratio selection area 1001 can be displayed in the virtual scene; it can be an interface independent of the virtual scene. The selection area 1001 contains the ratios 0.5, 0.7, and 1.2, and the touch subject can select one of them; in response to that trigger operation, the buttons included in the virtual scene are resized by the selected ratio so that each adjusted button area lies between the minimum touch area and the maximum touch area, and there is no need to adjust the buttons one by one manually.
  • When the zoom ratio takes continuous values, for example 50% to 120%, a zoom-ratio selection area 1101 can be displayed in the virtual scene; it can be an interface independent of the virtual scene. The area 1101 contains an adjustable slider 1102 over the continuous range of values, and the touch subject selects a ratio between 50% and 120% by sliding the slider. In response to the sliding operation, the selected ratio, for example 75%, is used as the zoom ratio for the buttons included in the virtual scene, and the buttons are resized accordingly so that each adjusted button area lies between the minimum touch area and the maximum touch area, with no need to adjust the buttons one by one manually.
  • During gameplay, players' finger sizes differ: Asian players tend to have slender fingers, so the buttons across the entire user interface (UI) also tend to be small, while European and American players are typically taller and have larger fingers; if the same small buttons are kept, a series of problems follows, such as a poor operating experience, clicks that do not land as expected, and frequent false touches.
  • In the related art, the controls for adjusting button size sit several menu levels deep; for a novice or casual player this carries a high learning cost, and it is difficult to discover and tune the most suitable button size unaided. Even a player who understands the adjustment method must still perform a complex adjustment for each individual button, so resizing takes a long time and results in a poor user experience.
  • To address this, an embodiment of the present application proposes a method for adapting and displaying a virtual scene that automates the player's adjustment of operation-button sizes, which lowers the player's learning cost to a certain extent and speeds up the adjustment, thereby improving the user experience.
  • The method proposed in the embodiment of the present application comprises two steps: detection interaction and generation interaction. Detection interaction detects the biological characteristics of different players, and generation interaction produces the corresponding UI, so that the final UI better matches the player's fingers, reducing false touches and improving the game experience.
  • Detection interaction works as follows: when the player enters a customizable UI, such as the in-game battle interface, an interface for detecting the size of the player's fingers pops up automatically; it contains a series of buttons ordered from large to small, and the player records finger data by clicking them. When the player can no longer comfortably click a button of a given size, the player's smallest reachable button (i.e., the minimum reachable area) is obtained; no UI button in the game should be smaller than this button, lest the player be unable to reach it. After the minimum-reachable-button detection is complete, the second step detects the maximum reachable area of the player's commonly used fingers (a finger whose usage frequency is greater than a threshold is a commonly used finger), for example the left and right thumbs commonly used in shooting games: the player presses and holds the circular pattern at the center of the screen until no larger area can be reached, at which point the maximum reachable area is recorded.
  • Generation interaction works as follows: from the player's minimum reachable area and maximum reachable area, a reachable-area interval is computed, and the area of any touchable button in the UI must not fall outside this interval. If a touchable button's area does not conform to the interval, it can be adjusted automatically according to the interval, completing the resizing of all buttons across the full game UI.
  • Step 1: As shown in FIG. 13A, a battle interface is displayed; this is an adjustable UI.
  • Step 2: Clicking the settings button 1301 in FIG. 13A automatically displays the detection pop-up shown in FIG. 13B and enters minimum-button-size detection.
  • Step 3: During minimum-button-size detection, the player clicks the buttons in FIG. 13B, arranged from large to small, in turn; a button counts as successfully clicked when the center of the touch hot zone lands on it. When a button can no longer be clicked, detection terminates and the minimum reachable button area is recorded.
  • The click-detection rule is: determine whether the center point of the contact region between the finger and the screen lies within the button area; if it does, the click is judged successful; if it lies outside the button area, the click is judged failed and the button function is not triggered, as sketched below.
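  • A minimal sketch of this click test; the rectangular hit area and the point-sample representation of the contact patch are assumptions, since the patent only specifies the center-point-in-button rule:

```python
def click_succeeds(contact_points, button_rect):
    """contact_points: (x, y) samples of the finger-screen contact patch.
    button_rect: (left, top, right, bottom) of the button's hit area.
    The click succeeds only if the centroid of the patch lies inside the button.
    """
    cx = sum(x for x, _ in contact_points) / len(contact_points)
    cy = sum(y for _, y in contact_points) / len(contact_points)
    left, top, right, bottom = button_rect
    return left <= cx <= right and top <= cy <= bottom
```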
  • Following this principle, minimum-button-size detection per FIG. 13B requires clicking the three stacked prototype buttons of each size in order from large to small (the group-of-three stacked-button method avoids the error of single-button detection). A successfully clicked prototype button is shown in green; a failed one is shown in red. The player must click all three stacked buttons successfully before entering the detection of the next, smaller group. Among the smaller stacked buttons it is more likely that a click does not take effect or fails outright; as shown in FIG. 13B, the click fails on the 45px button. On failure, the subsequent smaller sizes are not displayed, and the size of the previous group that was clicked completely successfully is recorded as the minimum button size, for example 75px. The loop is sketched below.
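  • The descending, three-in-a-group detection loop could then look like the following sketch; the size list and the interactive `try_click` callback are illustrative assumptions:

```python
def detect_min_button_size(sizes_desc, try_click):
    """sizes_desc: candidate sizes in descending order, e.g. [220, 120, 75, 45].
    try_click(size) -> bool: whether the player successfully clicks one button
    of this size (per the center-point test above).
    Returns the last size whose whole group of three was clicked successfully."""
    min_size = None
    for size in sizes_desc:
        if all(try_click(size) for _ in range(3)):  # all three stacked buttons
            min_size = size  # this size is still comfortably reachable
        else:
            break  # a failed click ends detection; smaller sizes are not shown
    return min_size  # e.g. 75 when the 45px group fails
```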
  • Step 4: After minimum-button-size detection is complete, clicking the "Next" button 1302 in FIG. 13B enters maximum-area detection.
  • Step 5: As shown in FIG. 13C, in the maximum-area detection step the player must cover as much of region 1303 on the screen as possible, until no more can be covered; the maximum reachable button area is then recorded, and the touched area is shown with a highlight. Clicking the "Next" button 1304 in FIG. 13C opens the confirmation interface shown in FIG. 13D.
  • The touch-screen detection rule is: a fixed touchable region serves as the graphic-detection base plate, and the player covers as much of the plate as possible with the commonly used finger. When the screen detects a touch, the region 1305 currently being touched is rendered green to represent the player's actually reachable area; where no touch is detected, the plate keeps its original color. Taking the geometric center of the successfully touched shape as the circle center, the distance r from the edge of the touched shape to the center is used as the radius; from r a circular area is computed and recorded as the player's maximum reachable button size, as sketched below.
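  • Written directly from this rule, the area computation might look as follows; using the farthest edge point as r is one reading of this passage (FIG. 8 earlier in the description also allows the nearest edge point), and the pixel-sample representation of the touched region is an assumption:

```python
import math

def max_reach_area(touched_points):
    """touched_points: (x, y) pixels of the region actually covered (rendered green).
    Takes the geometric center of the touched shape as the circle center and the
    distance r from the shape's edge to that center as the radius; returns pi*r^2."""
    cx = sum(x for x, _ in touched_points) / len(touched_points)
    cy = sum(y for _, y in touched_points) / len(touched_points)
    r = max(math.hypot(x - cx, y - cy) for x, y in touched_points)
    return math.pi * r * r  # recorded as the player's maximum reachable button size
```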
  • Step 6: As shown in FIG. 13D, the minimum button area and the maximum button area are displayed; clicking the "Confirm and Generate" button 1306 in FIG. 13D starts interface generation.
  • The minimum button area is recorded as the minimum value A (75px in FIG. 13D), and the maximum button area is recorded as the maximum value B (300*300px in FIG. 13D).
  • Step 7: The interface is generated, and the process ends.
  • As shown in FIG. 13A, the battle interface is composed of buttons of various sizes, including some smaller buttons near the minimap in the upper-right corner and the largest buttons in the figure, such as the movement joystick and the attack button.
  • After the minimum and maximum button areas recorded from the player's biometrics are confirmed, the sizes of all buttons in the existing UI are checked against [A, B], i.e., no smaller than A and no larger than B. If any button's size does not conform to [A, B], for example a button smaller than A, its size is automatically adjusted to A; once every button conforms to [A, B], interface generation is complete, producing, for example, the interface shown in FIG. 13E, where the size of the attack button 1307 is automatically adjusted to B.
  • In summary, the embodiment of the present application obtains the player's biometric data directly by detecting button sizes and can customize the entire UI from that data, improving the player's efficiency in adjusting the custom interface and enhancing the user experience.
  • The display module 4551 is configured to display a virtual scene and a plurality of buttons of different sizes; the processing module 4552 is configured to obtain, in response to a touch operation on the plurality of buttons of different sizes, the touch area corresponding to the touch operation; the update module 4553 is configured to update and display the virtual scene, wherein the sizes of the buttons included in the updated virtual scene are adapted to the touch area corresponding to the touch operation.
  • In some embodiments, the display module 4551 is further configured to display a plurality of buttons of different sizes in the virtual scene during the running of the virtual scene, wherein each of the buttons is associated with an interaction function in the virtual scene.
  • In some embodiments, the processing module 4552 is further configured to obtain, in response to a touch operation on the plurality of buttons of different sizes displayed in the virtual scene, the touch area of the touch operation on the buttons.
  • In some embodiments, the display module 4551 is further configured to display a virtual scene and to display an adaptation detection area independent of the virtual scene, wherein the adaptation detection area includes a plurality of buttons of different sizes unrelated to the interaction functions in the virtual scene; the processing module 4552 is further configured to, in response to a touch operation on the plurality of buttons of different sizes in the adaptation detection area, pause or continue the interaction process in the virtual scene and obtain the touch area corresponding to the touch operation.
  • In some embodiments, the apparatus 455 for adapting and displaying a virtual scene further includes: a setting module 4554 configured to display a button adaptation detection entry in the virtual scene, and, in response to a trigger operation on the button adaptation detection entry, determine that the operation of displaying the adaptation detection area independent of the virtual scene is to be performed.
  • In some embodiments, the setting module 4554 is further configured to obtain the frequency at which a button included in the virtual scene is touched by mistake, and, when the frequency is greater than a frequency threshold, determine that the operation of displaying the button adaptation detection entry in the virtual scene is to be performed.
  • In some embodiments, the type of the touch area includes a minimum touch area; the processing module 4552 is further configured to display the plurality of buttons of different sizes in sequence in the adaptation detection area in descending order of size, and, in response to touch operations performed in sequence on the plurality of buttons of different sizes, obtain the minimum touch area corresponding to the touch operation.
  • In some embodiments, the processing module 4552 is further configured to perform the following processing for a button of any one size among the plurality of buttons of different sizes: when the number of times the button of that size is touched by mistake is greater than a false-touch threshold, determine the size among the plurality of sizes that satisfies a preset condition as the minimum touch area corresponding to the touch operation, wherein the preset condition includes: being adjacent to that size and larger than that size (see the sketch below).
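  • A small illustration of this preset condition, assuming the detected sizes are kept in descending order:

```python
def min_touch_size(sizes_desc, mistouched_size):
    """Return the size adjacent to and larger than the mis-touched size,
    which is taken as the minimum touch area (e.g. 75 for a mis-touched 45)."""
    i = sizes_desc.index(mistouched_size)
    return sizes_desc[i - 1] if i > 0 else sizes_desc[0]

min_touch_size([220, 120, 75, 45], 45)  # -> 75
```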
  • In some embodiments, the type of the touch area includes a maximum touch area; the processing module 4552 is further configured to acquire the fingerprint applied to the button and use the area of the fingerprint as the maximum touch area corresponding to the touch operation.
  • the processing module 4552 is further configured to acquire multiple fingerprints corresponding to pressing the button multiple times, and use the fingerprint with the largest area among the multiple fingerprints as the fingerprint applied to the button.
  • In some embodiments, the processing module 4552 is further configured to acquire multiple fingerprints corresponding to pressing the button multiple times, and to perform the following processing for any one of them: when the duration for which the area of the fingerprint remains unchanged during pressing exceeds the time threshold, use that fingerprint as the fingerprint applied to the button.
  • In some embodiments, the update module 4553 is further configured to adjust the size of the buttons included in the virtual scene, wherein the area corresponding to each adjusted button size lies between the minimum touch area and the maximum touch area.
  • In some embodiments, the update module 4553 is further configured to obtain the frequency at which a button included in the virtual scene is touched by mistake and, when the frequency is greater than a frequency threshold, determine that the operation of adjusting the size of the buttons included in the virtual scene is to be performed.
  • In some embodiments, the update module 4553 is further configured to obtain a first button and a second button included in the virtual scene, wherein the area corresponding to the size of the first button is smaller than the minimum touch area and the area corresponding to the size of the second button is larger than the maximum touch area; to increase the size of the first button so that the area corresponding to the increased size lies between the minimum touch area and the maximum touch area; and to reduce the size of the second button so that the area corresponding to the reduced size lies between the minimum touch area and the maximum touch area.
  • In some embodiments, the update module 4553 is further configured to obtain a zoom ratio for the buttons included in the virtual scene, wherein the zoom ratio is used to adjust the sizes of all buttons included in the virtual scene so that each adjusted area lies between the minimum touch area and the maximum touch area, and to adjust the size of the buttons included in the virtual scene according to the zoom ratio.
  • In some embodiments, the update module 4553 is further configured to display, when the zoom ratio comprises multiple zoom-ratio values, a zoom-ratio selection area containing those values, and, in response to a selection operation on the zoom-ratio values included in the selection area, adjust the size of the buttons included in the virtual scene according to the selected value.
  • In summary, the apparatus for adapting and displaying a virtual scene has the following beneficial effects: through a plurality of buttons of different sizes in the virtual scene, the touch area that the touch subject can achieve is detected, and the sizes of the buttons included in the virtual scene are adjusted to match that touch area. Button sizes can therefore be adjusted through efficient human-computer interaction, the efficiency of human-computer interaction in the virtual scene is improved, and the resource consumption of the graphics-processing hardware for interaction-related computations is significantly reduced.
  • Embodiments of the present application provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the above-mentioned method for adapting and displaying the virtual scene in the embodiment of the present application.
  • The embodiments of the present application provide a computer-readable storage medium storing executable instructions; when the executable instructions are executed by a processor, they cause the processor to execute the method for adapting and displaying a virtual scene provided by the embodiments of the present application, for example, the methods shown in FIGS. 4A-4C.
  • The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; it may also be any device including one of, or any combination of, the foregoing memories.
  • Executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Executable instructions may, but do not necessarily, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (e.g., files that store one or more modules, subroutines, or code sections).
  • Executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Input From Keyboards Or The Like (AREA)
  • Processing Or Creating Images (AREA)
  • Digital Computer Display Output (AREA)

Abstract

A method for adapting and displaying a virtual scene, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: displaying a virtual scene and a plurality of buttons of different sizes; in response to a touch operation on the plurality of buttons of different sizes, obtaining a touch area corresponding to the touch operation; and updating and displaying the virtual scene, wherein the sizes of the buttons included in the updated virtual scene are adapted to the touch area corresponding to the touch operation.

Description

虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品
相关申请的交叉引用
本申请实施例基于申请号为202011620155.7、申请日为2020年12月31日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请实施例作为参考。
技术领域
本申请涉及计算机人机交互技术,尤其涉及一种虚拟场景的适配显示方法、装置、电子设备、计算机可读存储介质及计算机程序产品。
背景技术
基于图形处理硬件的显示技术,扩展了感知环境以及获取信息的渠道,尤其是虚拟场景的显示技术,能够根据实际应用需求实现受控于用户或人工智能的虚拟对象之间的多样化的交互,具有各种典型的应用场景,例如在军事演习仿真、以及游戏等的虚拟场景中,能够模拟虚拟对象之间的真实的对战过程。
其中,虚拟场景中的按钮被广泛应用,例如具有攻击功能的按钮,具有操作虚拟对象运动的摇杆按钮等,通过点击、按压、滑动等操作运用虚拟场景中的按钮,以实现相应的功能。
相关技术中,为了方便使用,在虚拟场景运行前,需要用户手动地逐个调整虚拟场景中的各个按钮的尺寸,以使调整后的按钮在虚拟场景运行时便于用户操作,但是,这种频繁的调整方式过于繁琐,影响了虚拟场景中人机交互的效率,进而影响使用体验。
发明内容
本申请实施例提供一种虚拟场景的适配显示方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够实现自动化调整虚拟场景中按钮的尺寸,提高虚拟场景中人机交互的效率。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种虚拟场景的适配显示方法,由电子设备执行,所述方法包括:
显示虚拟场景以及多个不同尺寸的按钮;
响应于针对所述多个不同尺寸的按钮的触控操作,获取所述触控操作对应的触达面积;
更新显示所述虚拟场景,其中,更新后的所述虚拟场景中包括的按钮的尺寸与所述触控操作对应的触达面积相适配。
本申请实施例提供一种虚拟场景的适配显示装置,包括:
显示模块,配置为显示虚拟场景以及多个不同尺寸的按钮;
处理模块,配置为响应于针对所述多个不同尺寸的按钮的触控操作,获取所述触控操作对应的触达面积;
更新模块,配置为更新显示所述虚拟场景,其中,更新后的所述虚拟场景中包括的按钮的尺寸与所述触控主体能够实现的触达面积相适配。
本申请实施例提供一种用于适配显示的电子设备,所述电子设备包括:
存储器,用于存储可执行指令;
处理器,用于通过执行所述存储器中存储的可执行指令,实现本申请实施例提供的虚拟场景的适配显示方法。
本申请实施例提供一种计算机可读存储介质,存储有可执行指令,用于引起处理器执行时,实现本申请实施例提供的虚拟场景的适配显示方法。
本申请实施例提供一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时,实现本申请实施例提供的虚拟场景的适配显示方法。
本申请实施例具有以下有益效果:
通过虚拟场景中的多个不同尺寸的按钮,检测触控操作对应的触达面积,调整虚拟场景中包括的按钮的尺寸以与触控操作对应的触达面积相适配,从而能够以高效率的人机交互操作调整按钮的尺寸,提高虚拟场景中人机交互的效率,同时图形处理硬件进行人机交互的相关计算的资源消耗得以显著节约。
附图说明
图1A-图1D是相关技术提供的按钮尺寸调整的界面示意图;
图2A是本申请实施例提供的虚拟场景的适配显示方法的应用模式示意图;
图2B是本申请实施例提供的虚拟场景的适配显示方法的应用模式示意图;
图3A是本申请实施例提供的用于适配显示的电子设备的结构示意图;
图3B是本申请实施例提供的虚拟场景的适配显示装置中安装的人机交互引擎的原理示意图;
图4A-图4C是本申请实施例提供的虚拟场景的适配显示方法的流程示意图;
图5是本申请实施例提供的按钮尺寸检测的界面示意图;
图6是本申请实施例提供的按钮适配检测入口的界面示意图;
图7是本申请实施例提供的按钮尺寸检测的界面示意图;
图8是本申请实施例提供的最大按钮尺寸检测的界面示意图;
图9是本申请实施例提供的确认界面示意图;
图10是本申请实施例提供的离散的缩放比例值选择界面示意图;
图11是本申请实施例提供的连续的缩放比例值选择界面示意图;
图12是本申请实施例提供的自适应调整游戏UI界面的流程示意图;
图13A-图13E是本申请实施例提供的按钮尺寸调整的界面示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,所涉及的术语“第一\第二”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进 行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)虚拟场景:利用设备输出的区别于现实世界的场景,通过裸眼或设备的辅助能够形成对虚拟场景的视觉感知,例如通过显示屏幕输出的二维影像,通过立体投影、虚拟现实和增强现实技术等立体显示技术来输出的三维影像;此外,还可以通过各种可能的硬件形成听觉感知、触觉感知、嗅觉感知和运动感知等各种模拟现实世界的感知。
2)响应于:用于表示所执行的操作所依赖的条件或者状态,当满足所依赖的条件或状态时,所执行的一个或多个操作可以是实时的,也可以具有设定的延迟;在没有特别说明的情况下,所执行的多个操作不存在执行先后顺序的限制。
3)客户端:终端中运行的用于提供各种服务的应用程序,例如游戏客户端等、军事演习仿真客户端。
4)虚拟对象:虚拟场景中可以进行交互的各种人和物的形象,或在虚拟场景中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,比如,在虚拟场景中显示的人物、动物、植物、油桶、墙壁、石块等。该虚拟对象可以是该虚拟场景中的一个虚拟的用于代表用户的虚拟形象。虚拟场景中可以包括多个虚拟对象,每个虚拟对象在虚拟场景中具有自身的形状和体积,占据虚拟场景中的一部分空间。
例如,该虚拟对象可以是通过客户端上的操作进行控制的用户角色,也可以是通过训练设置在虚拟场景对战中的人工智能(AI,Artificial Intelligence),还可以是设置在虚拟场景互动中的非用户角色(NPC,Non-Player Character)。例如,该虚拟对象可以是在虚拟场景中进行对抗式交互的虚拟人物。例如,该虚拟场景中参与互动的虚拟对象的数量可以是预先设置的,也可以是根据加入互动的客户端的数量动态确定的。
以射击类游戏为例,用户可以控制虚拟对象在该虚拟场景的天空中自由下落、滑翔或者打开降落伞进行下落等,在陆地上中跑动、跳动、爬行、弯腰前行等,也可以控制虚拟对象在海洋中游泳、漂浮或者下潜等,当然,用户也可以控制虚拟对象乘坐虚拟载具在该虚拟场景中进行移动,例如,该虚拟载具可以是虚拟汽车、虚拟飞行器、虚拟游艇等,在此仅以上述场景进行举例说明,本申请实施例对此不作具体限定。用户也可以控制虚拟对象通过虚拟道具与其他虚拟对象进行对抗式的交互,例如,该虚拟道具可以是手雷、集束雷、粘性手雷等投掷类虚拟道具,也可以是机枪、手枪、步枪等射击类虚拟道具,本申请对虚拟道具的类型不作具体限定。
5)场景数据:表示虚拟场景中的对象在交互过程中受所表现的各种特征,例如,可以包括对象在虚拟场景中的位置。当然,根据虚拟场景的类型可以包括不同类型的特征;例如,在游戏的虚拟场景中,场景数据可以包括虚拟场景中配置的各种功能时需要等待的时间(取决于在特定时间内能够使用同一功能的次数),还可以表示游戏角色的各种状态的属性值,例如包括生命值(也称为红量)和魔法值(也称为蓝量)等。
6)触达面积:又称接触面积,用户(触控主体)通过接触某物体,用户能够触达该物体的面积,即用户在该物体的触摸热区面积。
在游戏过程中,通常会遇到不同玩家手指尺寸不同的问题,例如亚洲玩家的手指偏纤细,其整个用户界面(UI,User Interface)界面的按钮尺寸也会偏小,而欧美玩家天生身材高大,手指相对偏大,若仍采用偏小的按钮则会造成操作体验不佳,点击不符合预期,容易误触等一系列问题。
相关技术中,按钮尺寸调整的操作流程如下:当玩家处于战斗界面时,1)如图1A所示,点击小地图101左侧的“设置”按钮102,进入如图1B所示的设置页面;2)如图1B所示,点击列表中的操作设置页签103,进入如图1C所示的界面调整界面;3)如图1C所示,点击“自定义布局”按钮104,进入自定义调整界面;4)点击需要调整尺寸的某个按钮,进入如图1D所示的自定义调节界面;5)如图1D所示,利用界面中 的滑条105进行尺寸的调整;6)完成调整后,点击保存按钮,即完成按钮尺寸的设置。
然而,相关技术中按钮尺寸调整的操作层级较深,对于新手或者休闲玩家而言,需要较高的理解成本,并且难以自行学习和调节到最适合的按钮尺寸。即使玩家了解调整的方法,仍然需要玩家针对单个按钮进行复杂的调整,因此会造成较长时间进行按钮尺寸调整,会造成不佳的用户体验。
为了解决上述问题,本申请实施例提供一种虚拟场景的适配显示方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够实现自动化调整虚拟场景中按钮的尺寸,提高虚拟场景中人机交互的效率。
下面说明本申请实施例提供的电子设备的示例性应用,本申请实施例提供的电子设备可以实施为笔记本电脑,平板电脑,台式计算机,机顶盒,移动设备(例如,移动电话,便携式音乐播放器,个人数字助理,专用消息设备,便携式游戏设备)等各种类型的用户终端,也可以实施为服务器。下面,将说明设备实施为终端时示例性应用。
为便于更容易理解本申请实施例提供的虚拟场景的适配显示方法,首先说明本申请实施例提供的虚拟场景的适配显示方法的示例性实施场景,虚拟场景可以完全基于终端输出,或者基于终端和服务器的协同来输出。
在一些实施例中,虚拟场景可以是军事演习仿真中所呈现的画面,用户可以在虚拟场景中,通过属于不同团队的虚拟对象来模拟战局、战略或战术,对于军事作战的指挥有着很大的指导作用。
在一些实施例中,虚拟场景可以是供游戏角色交互的环境,例如可以是供游戏角色在虚拟场景中进行对战,通过控制虚拟对象的行动可以在虚拟场景中进行双方互动,从而使用户能够在游戏的过程中舒缓生活压力。
在一个实施场景中,参见图2A,图2A是本申请实施例提供的虚拟场景的适配显示方法的应用模式示意图,适用于一些完全依赖终端400的计算能力即可完成虚拟场景100的相关数据计算的应用模式,例如单机版/离线模式的游戏,通过智能手机、平板电脑和虚拟现实/增强现实设备等终端400完成虚拟场景的输出。
当形成虚拟场景100的视觉感知时,终端400通过图形计算硬件计算显示所需要的数据,并完成显示数据的加载、解析和渲染,在图形输出硬件输出能够对虚拟场景形成视觉感知的视频帧,例如,在智能手机的显示屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;此外,为了丰富感知效果,设备还可以借助不同的硬件来形成听觉感知、触觉感知、运动感知和味觉感知的一种或多种。
作为示例,终端400运行客户端410(例如单机版的游戏应用),在客户端410的运行过程中输出包括有角色扮演的虚拟场景,虚拟场景是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等等;虚拟场景中包括第一虚拟对象110和虚拟道具120,第一虚拟对象110可以是受用户(或称玩家)控制的游戏角色,即第一虚拟对象110受控于真实用户,将响应于真实用户针对按钮(包括摇杆按钮、攻击按钮、防御按钮等)的操作而在虚拟场景中操作,例如当真实用户向左移动摇杆按钮时,第一虚拟对象将在虚拟场景中向左部移动,还可以保持原地静止、跳跃以及使用各种功能(如技能和道具);虚拟道具120可以是在虚拟场景中被第一虚拟对象110使用的对战工具,例如,第一虚拟对象110可以通过移动摇杆按钮拾取虚拟场景中的虚拟道具120,从而使用虚拟道具120的功能进行游戏对战。
举例来说,通过用户对客户端410显示的多个不同尺寸的按钮进行触控操作,以确定触控操作对应的触达面积,并基于触控操作对应的触达面积调整虚拟场景中包括的按钮(例如摇杆按钮130)的尺寸,以使调整后的按钮的尺寸与触控操作对应的触达面积 相适配,以基于调整后的按钮进行后续的人机交互,例如,用户通过调整后的控制摇杆按钮130控制第一虚拟对象110移动至虚拟场景中的虚拟道具120时,以拾取虚拟场景中的虚拟道具120,从而无需用户手动地逐个调整虚拟场景中的各个按钮的尺寸,以高效率的人机交互操作调整按钮的尺寸,提高虚拟场景中人机交互的效率。
在另一个实施场景中,参见图2B,图2B是本申请实施例提供的虚拟场景的适配显示方法的应用模式示意图,应用于终端400和服务器200,适用于依赖服务器200的计算能力完成虚拟场景计算、并在终端400输出虚拟场景的应用模式。
以形成虚拟场景100的视觉感知为例,服务器200进行虚拟场景相关显示数据的计算并发送到终端400,终端400依赖于图形计算硬件完成计算显示数据的加载、解析和渲染,依赖于图形输出硬件输出虚拟场景以形成视觉感知,例如可以在智能手机的显示屏幕呈现二维的视频帧,或者,在增强现实/虚拟现实眼镜的镜片上投射实现三维显示效果的视频帧;对于虚拟场景的形式的感知而言,可以理解,可以借助于终端的相应硬件输出,例如使用麦克风输出形成听觉感知,使用振动器输出形成触觉感知等等。
作为示例,终端400运行客户端410(例如网络版的游戏应用),通过连接游戏服务器(即服务器200)与其他用户进行游戏互动,终端400输出客户端410的虚拟场景100,其中包括第一虚拟对象110和虚拟道具120,第一虚拟对象110可以是受用户控制的游戏角色,即第一虚拟对象110受控于真实用户,将响应于真实用户针对按钮(例如摇杆按钮、攻击按钮、防御按钮等)的操作而在虚拟场景中操作,例如当真实用户向左移动摇杆时,第一虚拟对象将在虚拟场景中向左部移动,还可以保持原地静止、跳跃以及使用各种功能(如技能和道具);虚拟道具120可以是在虚拟场景中被第一虚拟对象110使用的对战工具,例如,第一虚拟对象110可以通过移动摇杆按钮拾取虚拟场景中的虚拟道具120,从而使用虚拟道具120的功能进行游戏对战。
举例来说,通过用户对客户端410显示的多个不同尺寸的按钮进行触控操作,客户端410将用户的触控操作通过网络300发送至服务器200,服务器200根据用户的触控操作,确定触控操作对应的触达面积,并基于触控操作对应的触达面积调整虚拟场景中包括的按钮(例如摇杆按钮130)的尺寸,以使调整后的按钮的尺寸与触控操作对应的触达面积相适配,并将调整后的按钮发送至客户端410,客户端410接收到调整后的按钮后,呈现调整后的按钮(例如摇杆按钮130),以基于调整后的按钮进行后续的人机交互,例如,用户通过调整后的控制摇杆按钮130控制第一虚拟对象110移动至虚拟场景中的虚拟道具120时,以拾取虚拟场景中的虚拟道具120,从而能够以高效率的人机交互操作调整按钮的尺寸,提高虚拟场景中人机交互的效率。
在一些实施例中,终端400可以通过运行计算机程序来实现本申请实施例提供的虚拟场景的适配显示方法,例如,计算机程序可以是操作系统中的原生程序或软件模块;可以是本地(Native)应用程序(APP,Application),即需要在操作系统中安装才能运行的程序,例如游戏APP(即上述的客户端410);也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的游戏小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
本申请实施例可以借助于云技术(Cloud Technology)实现,云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。
云技术是基于云计算商业模式应用的网络技术、信息技术、整合技术、管理平台技术、以及应用技术等的总称,可以组成资源池,按需所用,灵活便利。云计算技术将变成重要支撑。技术网络系统的后台服务需要大量的计算、存储资源。
作为示例,服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的 服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端400可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、以及智能手表等,但并不局限于此。终端400以及服务器200可以通过有线或无线通信方式进行直接或间接地连接,本申请实施例中不做限制。
参见图3A,图3A是本申请实施例提供的用于适配显示的电子设备的结构示意图,以电子设备为终端为例进行说明,图3A所示的电子设备包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。电子设备400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图3A中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450例如包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块452,用于经由一个或多个(有线或无线)网络接口420到达其他计算设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块453,用于经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作外围设备和显示内容和信息的用户接口);
输入处理模块454,用于对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的虚拟场景的适配显示装置可以采用软件方式实现,图2A示出了存储在存储器450中的虚拟场景的适配显示装置455,其可以是程序和插件等形式的软件,包括以下软件模块:显示模块4551、处理模块4552、更新模块4553以及设置模块4554,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分,将在下文中说明各个模块的功能。
参见图3B,图3B是本申请实施例提供的虚拟场景的适配显示装置中安装的人机 交互引擎的原理示意图,以应用于游戏为例,又可以称为游戏引擎,游戏引擎是指一些已编写好的可编辑电脑游戏系统或者一些交互式实时图像应用程序的核心组件,这些系统为游戏设计者提供各种编写游戏所需的各种工具,其目的在于让游戏设计者能容易和快速地做出游戏程式而不用由零开始,游戏引擎包括:渲染引擎(即“渲染器”,含二维图像引擎和三维图像引擎)、物理引擎、障碍物检测系统、音效、脚本引擎、电脑动画、人工智能、网络引擎以及场景管理,游戏引擎是一个为运行某一类游戏的机器设计的能够被机器识别的代码(指令)集合,它像一个发动机,控制着游戏的运行,一个游戏程序可以分为游戏引擎和游戏资源两大部分,游戏资源包括图像,声音,动画等部分,游戏=引擎(程序代码)+资源(图像,声音,动画等),游戏引擎则是按游戏设计的要求顺序地调用这些资源。
本申请实施例提供的虚拟场景的适配显示方法是由图3A中所示出的虚拟场景的适配显示装置中的各个模块通过调用图3B所示出的人机交互引擎的相关组件实现的,下面示例性说明。
例如,显示模块4551用于显示虚拟场景以及多个不同尺寸的按钮,显示模块4551调用图3B所示游戏引擎中的用户界面部分实现用户与游戏之间的交互,通过调用游戏引擎中的模型部分制作二维或者三维模型,并在模型制作完毕之后,通过骨骼动画部分按照不同的面把材质贴图赋予模型,这相当于为骨骼蒙上皮肤,最后再通过渲染部分将模型、动画、光影、特效等所有效果实时计算出来并展示在人机交互界面上。
例如,处理模块4552用于响应于针对多个不同尺寸的按钮的触控操作,获取触控操作对应的触达面积,并调用更新模块4553,更新显示虚拟场景,通过渲染模块对虚拟场景中包括的按钮进行渲染并展示在人机交互界面上,使得人机交互界面上更新后的虚拟场景中包括的按钮的尺寸与触控操作对应的触达面积相适配。
例如,设置模块4553用于在虚拟场景中显示按钮适配检测入口;响应于针对按钮适配检测入口的触发操作,设置模块4555调用图3B所示游戏引擎中的渲染模块,通过渲染模块对按钮适配检测入口进行渲染并展示在人机交互界面上。
如前所述,本申请实施例提供的虚拟场景的适配显示方法可以由各种类型的电子设备实施,例如终端。参见图4A,图4A是本申请实施例提供的虚拟场景的适配显示方法的流程示意图,结合图4A示出的步骤进行说明。
在下面步骤中,多个不同尺寸的按钮可以是与虚拟场景中的交互功能(例如选择角色,控制角色)关联的功能按钮,例如控制虚拟对象运动的摇杆按钮、控制虚拟对象攻击其他虚拟对象的攻击按钮等,从而触控主体在操作多个不同尺寸的功能按钮的过程中,电子设备响应于针对功能按钮的触控操作,学习到触控主体能够实现的触达面积(即触控操作对应的触达面积),然后基于触控主体能够实现的触达面积自动或在用户确认需要更新后,更新虚拟场景,使得更新后的虚拟场景中包括的按钮的尺寸与用户能够实现的触达面积相适配。
在下面步骤中,多个不同尺寸的按钮还可以是与虚拟场景中的交互功能(例如选择角色,控制角色)无关的专用于检测触达面积的按钮,例如专用于检测触达面积的按钮的检测按钮等,从而用户在操作多个不同尺寸的检测按钮的过程中,电子设备响应于针对检测按钮的触控操作,获取触控操作对应的触达面积,然后基于触控操作对应的触达面积自动或在用户确认需要更新后,更新虚拟场景,使得更新后的虚拟场景中包括的功能按钮的尺寸与触控操作对应的触达面积相适配。
需要说明的是,触控主体是指能够实现触控的对象,例如触控主体是真实的用户,用户通过手指触控电子设备上显示的按钮,从而通过电子设备的传感器检测手指触控按钮时的触达面积;触控主体是具有触控功能的手套,用户通过手套触控电子设备上显示 的按钮,从而通过电子设备的传感器检测手套触控按钮时的触达面积。
需要说明的是,触控主体能够实现的触达面积(即触控操作对应的触达面积)是通过电子设备上显示屏对应的传感器检测到的,其中传感器包括多个传感单元,当触控主体在触控多个不同尺寸的按钮的过程中,触控主体将触发传感器中的多个触控单元,电子设备基于被触发的传感单元的数量,换算触控操作对应的触达面积,从而响应于针对按钮的触控操作,计算得到触控操作对应的触达面积。
在步骤101中,显示虚拟场景以及多个不同尺寸的按钮。
例如,多个不同尺寸的按钮均与虚拟场景中的交互功能(例如选择角色、控制角色)关联,其中,多个不同尺寸的按钮可以是功能相同或不同的按钮。在虚拟场景的运行过程中,在虚拟场景中显示多个不同尺寸的按钮,后续响应于针对在虚拟场景中显示的多个不同尺寸的按钮的触控操作,以获取触控主体针对按钮能够实现的触达面积,从而能够在虚拟场景运行的过程,通过触控主体实时操作,获取触控主体针对按钮能够实现的触达面积,无需额外生成与虚拟场景中的交互功能无关的按钮,以检测触控主体针对按钮能够实现的触达面积,节约人机交互的相关计算的资源消耗。
例如,多个不同尺寸的按钮均与虚拟场景中的交互功能(例如选择角色、控制角色)无关,例如专用于检测触达面积原型按钮等。在触控主体进入虚拟场景后,显示虚拟场景,并显示独立于虚拟场景的适配检测区域,该适配检测区域可以通过分屏或浮层显示,其中,适配检测区域中包括与虚拟场景中的交互功能无关的多个不同尺寸的按钮,例如,如图5所示,图5中的适配检测区域501通过浮层显示,与虚拟场景500区分开,适配检测区域501包括220px(像素)的按钮502、120px的按钮503、75px的按钮504以及45px的按钮505,后续响应于针对适配检测区域中多个不同尺寸的按钮的触控操作,暂停或继续虚拟场景中的交互过程,并获取触控主体能够实现的触达面积。其中,通过适配检测区域检测触控主体能够实现的触达面积的过程中,虚拟场景的进行可以随时暂停,或者持续进行。
承接上述示例,在显示独立于虚拟场景的适配检测区域之前,在虚拟场景中显示按钮适配检测入口;响应于针对按钮适配检测入口的触发操作,确定将执行显示独立于虚拟场景的适配检测区域的操作。
如图6所示,在触控主体进入虚拟场景后,在虚拟场景中显示按钮适配检测入口601,在触控主体通过点击或按压等操作触发按钮适配检测入口601后,将显示如图5所示的独立于虚拟场景的适配检测区域501。
需要说明的是,本申请实施例对触发操作不做限定,例如,触发操作可以是点击操作,还可以是长按操作等触摸式操作。
承接上述示例,在虚拟场景中显示按钮适配检测入口之前,获取虚拟场景中包括的按钮被触控主体误触的频率;当频率大于频率阈值时,确定将执行在虚拟场景中显示按钮适配检测入口的操作。
例如,在触控主体进入虚拟场景后,触控主体在虚拟场景运行的过程中,频繁出现失误时,即虚拟场景中包括的按钮被触控主体误触的频率大于频率阈值时,则自动弹出如图5所示的适配检测区域501。
在步骤102中,响应于针对多个不同尺寸的按钮的触控操作,获取触控操作对应的触达面积。
其中,触控主体能够实现的触达面积的类型包括最小触达面积以及最大触达面积。在获取触控主体能够实现的最小触达面积以及最大触达面积之后,在虚拟场景显示最小触达面积以及最大触达面积。
通过在虚拟场景中显示多个不同尺寸的按钮,并通过多个不同尺寸的按钮检测触控 主体能够实现的触达面积,以便后续基于触控主体能够实现的触达面积调整虚拟场景中包括的按钮的尺寸,从而无需手动调整虚拟场景中包括的按钮,减小繁琐的手动操作,实现自动化调整虚拟场景中按钮的尺寸,提高虚拟场景中人机交互的效率。
参见图4B,图4B是本申请实施例提供的虚拟场景的适配显示方法的一个可选的流程示意图,图4B示出图4A中还包括步骤104,步骤102可通过步骤1021实现,其中,触控主体能够实现的触达面积的类型包括最小触达面积:在步骤104中,按照多个不同尺寸的按钮的尺寸从大到小的顺序,在适配检测区域中依次显示多个不同尺寸的按钮;在步骤1021中,响应于依次针对多个不同尺寸的按钮的触控操作,获取触控操作对应的最小触达面积。
如图5所示,在适配检测区域501中从左到右依次显示220px的按钮502、120px的按钮503、75px的按钮504以及45px的按钮505,即按钮502、按钮503、按钮504以及按钮505按照按钮的尺寸从大到小进行排序,触控主体从左到右依次点击按钮502、按钮503、按钮504以及按钮505,从而在点击按钮502、按钮503、按钮504以及按钮505的过程中,获取触控主体能够实现的最小触达面积。
在一些实施例中,获取触控操作对应的最小触达面积,包括:针对多个不同尺寸的按钮中的任一尺寸的按钮执行以下处理:当任一尺寸的按钮被误触的次数大于误触阈值时,将多个尺寸中满足预设条件的尺寸确定为触控操作对应的最小触达面积;其中,预设条件包括:与任一尺寸相邻且大于任一尺寸。
如图5所示,当在检测触达面积的过程中,某尺寸的按钮被触控主体误触的次数大于误触阈值时,将与该尺寸相邻且大于该尺寸的尺寸确定为触控主体能够实现的最小触达面积,例如,按钮505被连续误触2次,则确定45px的尺寸的按钮被误触的次数大于误触阈值(例如,误触阈值设置为1),则将与该45px相邻且大于该45px的尺寸(即75px)作为触控主体能够实现的最小触达面积。
如图7所示,同一尺寸的按钮包含多个,例如,45px的按钮包含按钮702、按钮703以及按钮704,当在检测触达面积的过程中,某尺寸的按钮被触控主体误触的次数大于误触阈值时,将与该尺寸相邻且大于该尺寸的尺寸确定为触控主体能够实现的最小触达面积,例如,按钮702以及按钮704在检测的过程中被误触,则确定45px的尺寸的按钮被误触的次数大于误触阈值(例如,误触阈值设置为1),则将与该45px相邻且大于该45px的尺寸(即75px)作为触控主体能够实现的最小触达面积。
在一些实施例中,触达面积的类型包括最大触达面积;获取触控操作对应的触达面积,包括:获取施加在按钮的指纹,并将指纹的面积作为触控操作对应的最大触达面积。
如图8所示,在检测触达面积的过程中,获取触控主体施加在按钮801的指纹802,指纹802的中心为中心803,将中心803与指纹边缘的最大距离804作为指纹的半径r1,将该半径为r1的圆的面积(即指纹的面积)作为触控主体能够实现的最大触达面积,或者将中心803与指纹边缘的最小距离805作为指纹的半径r2,将该半径为r2的圆的面积(即指纹的面积)作为触控主体能够实现的最大触达面积。
例如,在检测触达面积的过程中,获取触控主体多次按压按钮时对应的多个指纹,将多个指纹中最大面积的指纹作为触控主体施加在按钮的指纹。
承接上述示例,获取施加在按钮的指纹,包括:获取多次按压按钮时对应的多个指纹;针对多个指纹中的任一指纹执行以下处理:当指纹的面积在按压过程中保持不变的时间超出时间阈值时,将指纹作为施加在按钮的指纹。
例如,由于触控主体在按压按钮时不稳定,则获取触控主体多次按压按钮时对应的多个指纹不稳定,可能获取到并不是在触控主体舒适的情况下的指纹,即该情况下所获取的最大触达面积不符合触控主体的使用习惯,容易造成误触的情况。
为了获取符合触控主体的使用习惯的指纹,在检测触达面积的过程中,当某指纹的面积保持不变的时间超出时间阈值时,将该指纹作为触控主体施加在按钮的指纹,从而获得稳定的指纹,即在触控主体舒适的情况下获取的指纹,该情况下所获取的最大触达面积符合触控主体的使用习惯,提高用户体验感。
在步骤103中,更新显示虚拟场景,其中,更新后的虚拟场景中包括的按钮的尺寸与触控操作对应的触达面积相适配。
例如,通过多个不同尺寸的按钮检测触控主体能够实现的触达面积后,基于触控主体能够实现的触达面积调整虚拟场景中包括的按钮的尺寸,从而无需手动调整虚拟场景中包括的按钮,减小繁琐的手动操作,实现自动化调整虚拟场景中按钮的尺寸,提高虚拟场景中人机交互的效率。
其中,触控主体能够实现的触达面积的类型包括最小触达面积以及最大触达面积。如图9所示,在确定最小触达面积以及最大触达面积后,可以在虚拟场景显示确认界面,该确认界面可以是独立于虚拟场景的界面,该确认界面包括最小触达面积901以及最大触达面积902,触控主体点击“确定并生成”按钮903后,响应于触控主体针对按钮903的触发操作,调整虚拟场景中包括的按钮的尺寸,以使调整后的按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间,从而无需手动一一调整虚拟场景中包括的按钮。
承接上述示例,在调整虚拟场景中包括的按钮的尺寸之前,获取虚拟场景中包括的按钮被触控主体误触的频率,当频率大于频率阈值时,确定将执行调整按钮的尺寸的操作。
例如,并不是虚拟场景中包括的所有按钮都需要调整,只有按钮在虚拟场景运行的过程中,频繁出现误触时,才调整按钮的尺寸,即定时获取虚拟场景中包括的按钮被触控主体误触的频率,当频率大于频率阈值时,则判定按钮频繁出现误触,确定该按钮需要执行调整按钮的尺寸的操作。
参见图4C,图4C是本申请实施例提供的虚拟场景的适配显示方法的一个可选的流程示意图,图4C示出图4A中的步骤103可通过步骤1031-步骤1033实现,其中,触达面积的类型包括最小触达面积:在步骤1031中,获取虚拟场景中包括的第一按钮以及第二按钮;在步骤1032中,调大第一按钮的尺寸,以使调大后的第一按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间;在步骤1033中,调小第二按钮的尺寸,以使调小后的第二按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间。
其中,第一按钮的尺寸对应的面积小于最小触达面积,第二按钮的尺寸对应的面积大于最大触达面积。通过将尺寸较小的按钮的尺寸调大、将尺寸较大的按钮的尺寸调小,而虚拟场景中按钮的尺寸已位于最小触达面积以及最大触达面积之间的按钮不进行按钮尺寸调整,从而使得虚拟场景中所有的按钮位于最小触达面积以及最大触达面积之间,在虚拟场景运行时,虚拟场景中的按钮的尺寸符合触控主体的使用习惯,减小误触的情况,提高用户体验感。
例如,当虚拟场景中包括的第一按钮的尺寸小于最小触达面积时,调整第一按钮的尺寸,以使调整后的第一按钮的尺寸为最小触达面积,当虚拟场景中包括的第二按钮的尺寸大于最大触达面积时,调整第二按钮的尺寸,以使调整后的第二按钮的尺寸为最大触达面积。
在一些实施例中,调整虚拟场景中包括的按钮的尺寸,包括:获取针对虚拟场景中包括的按钮的缩放比例,其中,缩放比例用于调整虚拟场景中包括的所有按钮的尺寸,以使调整后的尺寸对应的面积位于最小触达面积以及最大触达面积之间;按照缩放比例 调整虚拟场景中包括的按钮的尺寸。
例如,在获取最小触达面积以及最大触达面积之后,确定虚拟场景中所有按钮的原有尺寸,并通过计算确定出调整虚拟场景中包括的所有按钮的尺寸,以使调整后的尺寸对应的面积位于最小触达面积以及最大触达面积之间的缩放比例,并按照缩放比例调整虚拟场景中包括的按钮的尺寸,使得虚拟场景中调整后的尺寸对应的面积位于最小触达面积以及最大触达面积之间,从而自动化调整虚拟场景中所有按钮的尺寸,提高虚拟场景中人机交互的效率。
在一些实施例中,当缩放比例为多个缩放比例值时,按照缩放比例调整虚拟场景中包括的按钮的尺寸,包括:显示缩放比例选择区域,其中,缩放比例选择区域中包括多个缩放比例值;响应于针对缩放比例选择区域中包括的多个缩放比例值的选择操作,按照被选中的缩放比例值调整虚拟场景中包括的按钮的尺寸。
需要说明的是,本申请实施例对选择操作不做限定,例如,选择操作可以是点击操作,还可以是长按操作等。
如图10所示,当存在多个离散的缩放比例值时,例如0.5、0.7以及1.2,可以在虚拟场景显示缩放比例选择区域1001,该缩放比例选择区域1001可以是独立于虚拟场景的界面,该选择界面1001包括多个缩放比例,例如0.5、0.7以及1.2,触控主体可以选择0.5、0.7以及1.2其中的一个,从而响应于触控主体针对缩放比例的触发操作,按照选择的缩放比例调整虚拟场景中包括的按钮的尺寸,以使调整后的按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间,从而无需手动一一调整虚拟场景中包括的按钮。
如图11所示,当存在连续的缩放比例值时,例如50%~120%,可以在虚拟场景显示缩放比例选择区域1101,该缩放比例选择区域1101可以是独立于虚拟场景的界面,该缩放比例选择区域1101包括连续的缩放比例值的可调节滑条1102,触控主体通过滑动滑条1102选择50%~120%中的一个缩放比例,从而响应于针对可调节滑条1102的滑动操作,将可调节滑条中被选定的比例,例如75%作为针对虚拟场景中包括的按钮的缩放比例,按照选择的缩放比例调整虚拟场景中包括的按钮的尺寸,以使调整后的按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间,从而无需手动一一调整虚拟场景中包括的按钮。
下面,将说明本申请实施例在一个实际的游戏应用场景中的示例性应用。
在游戏过程中,通常会遇到不同玩家手指尺寸不同的问题,例如亚洲玩家的手指偏纤细,其整个用户界面(UI,User Interface)界面的按钮尺寸也会偏小,而欧美玩家天生身材高大,手指相对偏大,若仍采用偏小的按钮则会造成操作体验不佳,点击不符合预期,容易误触等一系列问题。
然而,相关技术中按钮尺寸调整的操作层级较深,对于新手或者休闲玩家而言,需要较高的理解成本,并且难以自行学习和调节到最适合的按钮尺寸。即使玩家了解调整的方法,仍然需要玩家针对单个按钮进行复杂的调整,因此会造成较长时间进行按钮尺寸调整,会造成不佳的用户体验。
为了解决上述问题,本申请实施例提出一种虚拟场景的适配显示方法,通过获取玩家的生物特征数据,使玩家调节操作按钮大小这一过程得以自动化,一定程度地降低了玩家的理解成本,提升了调整的速度,因此提升了用户体验。
本申请实施例提出的虚拟场景的适配显示方法包括2个步骤,分别为检测交互以及生成交互,通过检测交互针对不同玩家的生物特征进行检测,通过生成交互生成对应UI界面,最终可以让新的UI界面更匹配玩家的手指,减少误触,提升游戏体验。
其中,检测交互的方式如下:玩家在进入一个可定制的UI界面时,例如游戏中的 战斗界面,自动弹出并显示一个检测玩家手指尺寸的界面,该界面包含一系列从大到小尺寸的按钮,玩家可以通过点击按钮,记录手指数据;当玩家无法舒适地点击到某个尺寸的按钮时,即可获取该玩家的最小可触达按钮(即最小可触达面积),因此,游戏中所有的UI按钮不应小于此按钮,避免玩家无法触达。玩家完成最小可触达按钮的检测后,进入第二个步骤,该步骤用于检测玩家常用手指(当某手指的使用频率大于阈值时,该手指即为常用手指)的最大可触达面积,例如射击类游戏中常用的左右手拇指,玩家通过按住屏幕中心的圆形图案,直到无法继续触达更大的面积时,即可记录玩家最大可触达面积。
其中,生成交互的方式如下:根据玩家的最小可触达面积和最大可触达面积,计算出一个触达面积区间,UI界面中的可触按钮面积不可超过此区间范围,若UI界面中的可触按钮面积不符合该区间,则可根据区间自动调整可触按钮面积,以完成全套游戏UI界面中所有按钮尺寸的调整。
下面具体说明本申请实施例提出的虚拟场景的适配显示方法,主要分为3个部分:检测玩家手指可触达的最小面积;检测玩家手指可触达的最大面积;3.根据最小面积和最大面积自动生成全套的可交互界面。具体步骤如图12所示:
步骤1、如图13A所示,显示战斗界面,该界面是可调节的UI界面。
步骤2、点击图13A中的设置按钮1301,自动显示如图13B所示的检测用的弹窗,进入最小按钮尺寸检测。
步骤3、玩家在进行最小按钮尺寸检测的过程中,依次点击图13B中从大到小排列的按钮,通过识别触摸热区面积中心触发按钮,以判定点击按钮成功,当出现无法点击的情况时,检测终止,记录可触达的最小按钮面积。
其中,关于最小按钮尺寸检测,点击检测方式为:确定手指和屏幕接触面的图形面积中心点是否在按钮区域内,若图形面积中心点在按钮区域内时,则确定点击成功;相反,若图形面积中心点在按钮区域外时,则确定点击失败,按钮功能不触发。
依据以上技术原理,按照图13B进行最小按钮尺寸检测,需要从大到小依次点击堆叠的三个原型按钮(使用三个为一组的堆叠按钮检测方法,避免了单一按钮检测的误差),若点击成功则显示原型按钮为绿色,反之,若点击失败则显示原型按钮为红色;玩家需要完成三个堆叠按钮的成功点击,才可以进入下一组更小按钮的检测。其中,在较小的堆叠的按钮中,更容易遇到点击不生效,或者点击失败的情况,如图13B所示,点击45px尺寸的按钮时失败。当点击失败时,则不显示后续更小尺寸的按钮,并且记录前一个完全点击成功的按钮对应的尺寸为最小按钮尺寸,例如将75px作为最小按钮尺寸。
步骤4、完成最小按钮尺寸检测后,点击图13B中的“下一步”按钮1302,进入最大尺寸面积检测。
步骤5、如图13C所示,在最大尺寸面积检测步骤中,需要尽可能多地覆盖屏幕中的区域1303,直到无法覆盖为止,此时记录可触达的最大按钮面积,并在触摸区域通过高亮标记显示触摸面积,点击图13C中的“下一步”按钮1304,进入如图13D所示的确认界面。
其中,关于最大按钮尺寸检测,触屏检测方式为:利用一块固定的可触区域作为图形检测底板,玩家需要使用常用的手指尽可能多的覆盖底板,屏幕在检测到触摸的时候,将正在被触的区域1305渲染成绿色,用于表示玩家实际的可触达区域。若区域内没有检测到任何触达,则保持底板颜色不变。同时根据玩家成功触达的图形几何中心点为圆心,计算触达图形的边缘到圆心的距离r为半径,根据r的值可以计算出一个圆形面积,记录该面积为玩家的可触达最大按钮尺寸。
步骤6、如图13D所示,显示最小按钮面积和最大按钮面积,点击图13D中的“确 认并生成”按钮1306,即可开始界面生成。
如图13D所示，最小按钮面积记录为最小值A（如图13D所示的75px），最大按钮面积记录为最大值B（如图13D所示的300*300px）。
步骤7、界面生成,流程结束。
如图13A所示,战斗界面的构成是由大小不同的各种按钮所构成的,其中,包括右上角小地图附近的一部分较小尺寸的按钮和图中最大尺寸的按钮,例如移动摇杆和攻击按钮。在确认了根据玩家生物特征记录的最小按钮面积和最大按钮面积后,需要检测现有UI界面中全部按钮的尺寸符合[A,B],即不小于A,并且不大于B。若有按钮的尺寸不符合[A,B],例如某按钮尺寸小于A,则将该按钮的尺寸自动调整为A,直至所有按钮的尺寸符合[A,B],则界面生成完毕,例如生成如图13E所示的界面,例如,攻击按钮1307的尺寸自动调整为B。
综上,本申请实施例通过检测按钮尺寸的方式直接地获取玩家的生物特征数据,根据生物特征数据可以定制出整套UI界面,提升了玩家调节自定义界面的效率,增强了用户体验。
至此已经结合本申请实施例提供的终端的示例性应用和实施,说明本申请实施例提供的虚拟场景的适配显示方法,下面继续说明本申请实施例提供的虚拟场景的适配显示装置455中各个模块配合实现虚拟场景的适配显示的方案。
显示模块4551,配置为显示虚拟场景以及多个不同尺寸的按钮;处理模块4552,配置为响应于针对所述多个不同尺寸的按钮的触控操作,获取所述触控操作对应的触达面积;更新模块4553,配置为更新显示所述虚拟场景,其中,更新后的所述虚拟场景中包括的按钮的尺寸与所述触控操作对应的触达面积相适配。
在一些实施例中,所述显示模块4551还配置为在所述虚拟场景的运行过程中,在所述虚拟场景中显示多个不同尺寸的按钮,其中,所述按钮均与所述虚拟场景中的交互功能关联;所述处理模块4552还配置为响应于针对在所述虚拟场景中显示的多个不同尺寸的按钮的触控操作,获取所述触控操作针对所述按钮对应的触达面积。
在一些实施例中,所述显示模块4551还配置为显示虚拟场景,并显示独立于所述虚拟场景的适配检测区域,其中,所述适配检测区域中包括与所述虚拟场景中的交互功能无关的多个不同尺寸的按钮;所述处理模块4552还配置为响应于针对所述适配检测区域中多个不同尺寸的按钮的触控操作,暂停或继续所述虚拟场景中的交互过程,并获取所述触控操作对应的触达面积。
在一些实施例中,所述虚拟场景的适配显示装置455还包括:设置模块4554,配置为在所述虚拟场景中显示按钮适配检测入口;响应于针对所述按钮适配检测入口的触发操作,确定将执行显示独立于所述虚拟场景的所述适配检测区域的操作。
在一些实施例中,所述设置模块4554还配置为获取所述虚拟场景中包括的按钮被误触的频率;当所述频率大于频率阈值时,确定将执行在所述虚拟场景中显示所述按钮适配检测入口的操作。
在一些实施例中,所述触达面积的类型包括最小触达面积;所述处理模块4552还配置为按照所述多个不同尺寸的按钮的尺寸从大到小的顺序,在所述适配检测区域中依次显示所述多个不同尺寸的按钮;响应于依次针对所述多个不同尺寸的按钮的触控操作,获取所述触控操作对应的最小触达面积。
在一些实施例中,所述处理模块4552还配置为针对所述多个不同尺寸的按钮中的任一尺寸的按钮执行以下处理:当所述任一尺寸的按钮被误触的次数大于误触阈值时,将所述多个尺寸中满足预设条件的尺寸确定为所述触控操作对应的最小触达面积;其中,所述预设条件包括:与所述任一尺寸相邻且大于所述任一尺寸。
在一些实施例中,所述触达面积的类型包括最大触达面积;所述处理模块4552还配置为获取施加在所述按钮的指纹,并将所述指纹的面积作为所述触控操作对应的最大触达面积。
在一些实施例中,所述处理模块4552还配置为获取多次按压所述按钮时对应的多个指纹,将所述多个指纹中最大面积的指纹作为施加在所述按钮的指纹。
在一些实施例中,所述处理模块4552还配置为获取多次按压所述按钮时对应的多个指纹;针对所述多个指纹中的任一指纹执行以下处理:当所述指纹的面积在按压过程中保持不变的时间超出时间阈值时,将所述指纹作为施加在所述按钮的指纹。
在一些实施例中,所述更新模块4553还配置为调整所述虚拟场景中包括的按钮的尺寸,其中,调整后的所述按钮的尺寸对应的面积位于最小触达面积以及最大触达面积之间。
在一些实施例中,所述更新模块4553还配置为获取所述虚拟场景中包括的按钮被误触的频率,当所述频率大于频率阈值时,确定将执行调整所述虚拟场景中包括的按钮的尺寸的操作。
在一些实施例中,所述更新模块4553还配置为获取所述虚拟场景中包括的第一按钮以及第二按钮,其中,所述第一按钮的尺寸对应的面积小于所述最小触达面积,所述第二按钮的尺寸对应的面积大于所述最大触达面积;调大所述第一按钮的尺寸,其中,调大后的所述第一按钮的尺寸对应的面积位于所述最小触达面积以及所述最大触达面积之间;调小所述第二按钮的尺寸,其中,调小后的所述第二按钮的尺寸对应的面积位于所述最小触达面积以及所述最大触达面积之间。
在一些实施例中,更新模块4553还配置为获取针对所述虚拟场景中包括的按钮的缩放比例,其中,所述缩放比例用于调整所述虚拟场景中包括的所有按钮的尺寸,调整后的尺寸对应的面积位于所述最小触达面积以及所述最大触达面积之间;按照所述缩放比例调整所述虚拟场景中包括的按钮的尺寸。
在一些实施例中,更新模块4553还配置为当所述缩放比例为多个缩放比例值时,显示缩放比例选择区域,其中,所述缩放比例选择区域中包括所述多个缩放比例值;响应于针对所述缩放比例选择区域中包括的多个缩放比例值的选择操作,按照被选中的所述缩放比例值调整所述虚拟场景中包括的按钮的尺寸。
综上,本申请实施例提供的虚拟场景的适配显示装置具有以下有益效果:通过虚拟场景中的多个不同尺寸的按钮,检测触控主体能够实现的触达面积,调整虚拟场景中包括的按钮的尺寸以与触控主体能够实现的触达面积相适配,从而能够以高效率的人机交互操作调整按钮的尺寸,提高虚拟场景中人机交互的效率,同时图形处理硬件进行人机交互的相关计算的资源消耗得以显著节约。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟场景的适配显示方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景的适配显示方法,例如,如图4A-图4C示出的虚拟场景的适配显示方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式, 按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (19)

  1. A method for adapting and displaying a virtual scene, executed by an electronic device, the method comprising:
    displaying a virtual scene and a plurality of buttons of different sizes;
    in response to a touch operation on the plurality of buttons of different sizes, obtaining a touch area corresponding to the touch operation; and
    updating and displaying the virtual scene, wherein the sizes of the buttons included in the updated virtual scene are adapted to the touch area corresponding to the touch operation.
  2. The method according to claim 1, wherein
    the displaying a virtual scene and a plurality of buttons of different sizes comprises:
    during the running of the virtual scene, displaying the plurality of buttons of different sizes in the virtual scene, wherein each of the buttons is associated with an interaction function in the virtual scene; and
    the obtaining, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation comprises:
    in response to a touch operation on the plurality of buttons of different sizes displayed in the virtual scene, obtaining the touch area of the touch operation on the buttons.
  3. The method according to claim 1, wherein
    the displaying a virtual scene and a plurality of buttons of different sizes comprises:
    displaying a virtual scene, and displaying an adaptation detection area independent of the virtual scene, wherein the adaptation detection area includes a plurality of buttons of different sizes unrelated to the interaction functions in the virtual scene; and
    the obtaining, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation comprises:
    in response to a touch operation on the plurality of buttons of different sizes in the adaptation detection area, pausing or continuing the interaction process in the virtual scene, and obtaining the touch area corresponding to the touch operation.
  4. The method according to claim 3, wherein before the displaying an adaptation detection area independent of the virtual scene, the method further comprises:
    displaying a button adaptation detection entry in the virtual scene; and
    in response to a trigger operation on the button adaptation detection entry, determining that an operation of displaying the adaptation detection area independent of the virtual scene is to be performed.
  5. The method according to claim 4, wherein before the displaying a button adaptation detection entry in the virtual scene, the method further comprises:
    obtaining a frequency at which a button included in the virtual scene is touched by mistake; and
    when the frequency is greater than a frequency threshold, determining that an operation of displaying the button adaptation detection entry in the virtual scene is to be performed.
  6. The method according to claim 3, wherein
    a type of the touch area includes a minimum touch area;
    before the obtaining a touch area corresponding to the touch operation, the method further comprises:
    displaying the plurality of buttons of different sizes in sequence in the adaptation detection area in descending order of their sizes; and
    the obtaining, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation comprises:
    in response to touch operations performed in sequence on the plurality of buttons of different sizes, obtaining the minimum touch area corresponding to the touch operation.
  7. The method according to claim 6, wherein the obtaining the minimum touch area corresponding to the touch operation comprises:
    performing the following processing for a button of any one size among the plurality of buttons of different sizes:
    when the number of times the button of the size is touched by mistake is greater than a false-touch threshold, determining a size among the plurality of sizes that satisfies a preset condition as the minimum touch area corresponding to the touch operation,
    wherein the preset condition comprises: being adjacent to the size and larger than the size.
  8. The method according to claim 1, wherein
    a type of the touch area includes a maximum touch area; and
    the obtaining a touch area corresponding to the touch operation comprises:
    acquiring a fingerprint applied to the button, and using the area of the fingerprint as the maximum touch area corresponding to the touch operation.
  9. The method according to claim 8, wherein the acquiring a fingerprint applied to the button comprises:
    acquiring a plurality of fingerprints corresponding to pressing the button a plurality of times, and using the fingerprint with the largest area among the plurality of fingerprints as the fingerprint applied to the button.
  10. The method according to claim 8, wherein the acquiring a fingerprint applied to the button comprises:
    acquiring a plurality of fingerprints corresponding to pressing the button a plurality of times; and
    performing the following processing for any fingerprint among the plurality of fingerprints:
    when a duration for which the area of the fingerprint remains unchanged during pressing exceeds a time threshold, using the fingerprint as the fingerprint applied to the button.
  11. The method according to claim 1, wherein the updating and displaying the virtual scene comprises:
    adjusting the sizes of buttons included in the virtual scene, wherein the area corresponding to each adjusted button size lies between a minimum touch area and a maximum touch area.
  12. The method according to claim 11, wherein before the adjusting the sizes of buttons included in the virtual scene, the method further comprises:
    obtaining a frequency at which a button included in the virtual scene is touched by mistake; and
    when the frequency is greater than a frequency threshold, determining that an operation of adjusting the sizes of buttons included in the virtual scene is to be performed.
  13. The method according to claim 11, wherein the adjusting the sizes of buttons included in the virtual scene comprises:
    obtaining a first button and a second button included in the virtual scene, wherein the area corresponding to the size of the first button is smaller than the minimum touch area, and the area corresponding to the size of the second button is larger than the maximum touch area;
    increasing the size of the first button, wherein the area corresponding to the increased size of the first button lies between the minimum touch area and the maximum touch area; and
    reducing the size of the second button, wherein the area corresponding to the reduced size of the second button lies between the minimum touch area and the maximum touch area.
  14. The method according to claim 11, wherein the adjusting the sizes of buttons included in the virtual scene comprises:
    obtaining a zoom ratio for the buttons included in the virtual scene, wherein the zoom ratio is used to adjust the sizes of all buttons included in the virtual scene such that the area corresponding to each adjusted size lies between the minimum touch area and the maximum touch area; and
    adjusting the sizes of the buttons included in the virtual scene according to the zoom ratio.
  15. The method according to claim 14, wherein, when the zoom ratio comprises a plurality of zoom ratio values, the adjusting the sizes of the buttons included in the virtual scene according to the zoom ratio comprises:
    displaying a zoom ratio selection area, wherein the zoom ratio selection area includes the plurality of zoom ratio values; and
    in response to a selection operation on the plurality of zoom ratio values included in the zoom ratio selection area, adjusting the sizes of the buttons included in the virtual scene according to the selected zoom ratio value.
  16. An apparatus for adapting and displaying a virtual scene, the apparatus comprising:
    a display module configured to display a virtual scene and a plurality of buttons of different sizes;
    a processing module configured to obtain, in response to a touch operation on the plurality of buttons of different sizes, a touch area corresponding to the touch operation; and
    an update module configured to update and display the virtual scene, wherein the sizes of the buttons included in the updated virtual scene are adapted to the touch area corresponding to the touch operation.
  17. An electronic device, comprising:
    a memory configured to store executable instructions; and
    a processor configured to implement the method for adapting and displaying a virtual scene according to any one of claims 1 to 15 by executing the executable instructions stored in the memory.
  18. A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method for adapting and displaying a virtual scene according to any one of claims 1 to 15.
  19. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the method for adapting and displaying a virtual scene according to any one of claims 1 to 15.
PCT/CN2021/125374 2020-12-31 2021-10-21 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品 WO2022142626A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020227031607A KR20220130257A (ko) 2020-12-31 2021-10-21 가상 장면을 위한 적응형 디스플레이 방법 및 장치, 전자 디바이스, 저장 매체 및 컴퓨터 프로그램 제품
JP2022556518A JP7447299B2 (ja) 2020-12-31 2021-10-21 仮想シーンのアダプティブ表示方法及び装置、電子機器、並びにコンピュータプログラム
US17/856,449 US11995311B2 (en) 2020-12-31 2022-07-01 Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011620155.7 2020-12-31
CN202011620155.7A CN112684970B (zh) 2020-12-31 2020-12-31 虚拟场景的适配显示方法、装置、电子设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/856,449 Continuation US11995311B2 (en) 2020-12-31 2022-07-01 Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
WO2022142626A1 true WO2022142626A1 (zh) 2022-07-07

Family

ID=75453820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125374 WO2022142626A1 (zh) 2020-12-31 2021-10-21 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品

Country Status (5)

Country Link
JP (1) JP7447299B2 (zh)
KR (1) KR20220130257A (zh)
CN (1) CN112684970B (zh)
TW (1) TWI818343B (zh)
WO (1) WO2022142626A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112684970B (zh) * 2020-12-31 2022-11-29 腾讯科技(深圳)有限公司 虚拟场景的适配显示方法、装置、电子设备及存储介质
CN113426099B (zh) * 2021-07-07 2024-03-15 网易(杭州)网络有限公司 一种游戏中的显示控制方法及装置
CN114489882B (zh) * 2021-12-16 2023-05-19 成都鲁易科技有限公司 浏览器动态皮肤的实现方法及装置、存储介质
CN115017629B (zh) * 2022-02-16 2024-04-12 中国标准化研究院 按钮尺寸感知阈限的测试装置和方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180239582A1 (en) * 2017-02-23 2018-08-23 Frank Goss, III Mobile Virtual Assistant Device
CN109753327A (zh) * 2018-11-19 2019-05-14 努比亚技术有限公司 一种控件布局方法、终端及计算机可读存储介质
CN111632375A (zh) * 2020-05-28 2020-09-08 腾讯科技(深圳)有限公司 交互对象的显示方法和装置、存储介质及电子装置
CN112090067A (zh) * 2020-09-23 2020-12-18 腾讯科技(深圳)有限公司 虚拟载具的控制方法、装置、设备及计算机可读存储介质
CN112684970A (zh) * 2020-12-31 2021-04-20 腾讯科技(深圳)有限公司 虚拟场景的适配显示方法、装置、电子设备及存储介质

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293175A (ja) 2004-03-31 2005-10-20 Fujitsu Ten Ltd ユーザインタフェイス自動調整方法及び装置
JP2006268313A (ja) 2005-03-23 2006-10-05 Fuji Xerox Co Ltd 表示制御装置およびその表示内容の配置方法
US8564544B2 (en) * 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US8704775B2 (en) 2008-11-11 2014-04-22 Adobe Systems Incorporated Biometric adjustments for touchscreens
JP5177158B2 (ja) * 2010-03-19 2013-04-03 パナソニック株式会社 入力装置、入力ボタン表示方法及び入力ボタン表示プログラム
US20150160779A1 (en) * 2013-12-09 2015-06-11 Microsoft Corporation Controlling interactions based on touch screen contact area
US9227141B2 (en) * 2013-12-31 2016-01-05 Microsoft Technology Licensing, Llc Touch screen game controller
CN105930077A (zh) * 2016-04-12 2016-09-07 广东欧珀移动通信有限公司 屏幕显示对象的尺寸调整方法及装置
CN106951174B (zh) * 2017-03-22 2019-01-22 维沃移动通信有限公司 一种虚拟键盘的调整方法及移动终端
CN107203313B (zh) * 2017-05-24 2020-04-17 维沃移动通信有限公司 调整桌面显示对象方法、移动终端及计算机可读存储介质
CN107748638A (zh) * 2017-09-28 2018-03-02 努比亚技术有限公司 一种区域调整方法、终端及计算机可读存储介质
CN107890664A (zh) * 2017-10-23 2018-04-10 网易(杭州)网络有限公司 信息处理方法及装置、存储介质、电子设备
CN109062496B (zh) * 2018-08-09 2021-05-25 北京金山安全软件有限公司 操作区域的调整方法、装置、移动终端及存储介质
CN109499061B (zh) * 2018-11-19 2022-08-09 网易(杭州)网络有限公司 游戏场景画面的调整方法、装置、移动终端和存储介质
CN111324276B (zh) * 2018-12-17 2021-04-27 珠海格力电器股份有限公司 虚拟键盘的控制方法、装置和终端
CN111338540B (zh) * 2020-02-11 2022-02-18 Oppo广东移动通信有限公司 图片文本处理方法、装置、电子设备和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180239582A1 (en) * 2017-02-23 2018-08-23 Frank Goss, III Mobile Virtual Assistant Device
CN109753327A (zh) * 2018-11-19 2019-05-14 努比亚技术有限公司 一种控件布局方法、终端及计算机可读存储介质
CN111632375A (zh) * 2020-05-28 2020-09-08 腾讯科技(深圳)有限公司 交互对象的显示方法和装置、存储介质及电子装置
CN112090067A (zh) * 2020-09-23 2020-12-18 腾讯科技(深圳)有限公司 虚拟载具的控制方法、装置、设备及计算机可读存储介质
CN112684970A (zh) * 2020-12-31 2021-04-20 腾讯科技(深圳)有限公司 虚拟场景的适配显示方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
TWI818343B (zh) 2023-10-11
CN112684970B (zh) 2022-11-29
JP2023524368A (ja) 2023-06-12
JP7447299B2 (ja) 2024-03-11
CN112684970A (zh) 2021-04-20
TW202227172A (zh) 2022-07-16
US20220334716A1 (en) 2022-10-20
KR20220130257A (ko) 2022-09-26

Similar Documents

Publication Publication Date Title
WO2022142626A1 (zh) 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品
CN112691377B (zh) 虚拟角色的控制方法、装置、电子设备及存储介质
US20220297004A1 (en) Method and apparatus for controlling virtual object, device, storage medium, and program product
US20230330536A1 (en) Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium
CN109529340B (zh) 虚拟对象控制方法、装置、电子设备及存储介质
CN111643890A (zh) 卡牌游戏的交互方法、装置、电子设备及存储介质
WO2023231664A1 (zh) 一种与车载显示设备交互的方法、装置、设备、存储介质和计算机程序产品
US20230330525A1 (en) Motion processing method and apparatus in virtual scene, device, storage medium, and program product
WO2021244237A1 (zh) 虚拟对象控制方法、装置、计算机设备及存储介质
CN114344896A (zh) 基于虚拟场景的合拍处理方法、装置、设备及存储介质
US20230310989A1 (en) Object control method and apparatus in virtual scene, terminal device, computer-readable storage medium, and computer program product
US20230350554A1 (en) Position marking method, apparatus, and device in virtual scene, storage medium, and program product
CN113018862B (zh) 虚拟对象的控制方法、装置、电子设备及存储介质
US11995311B2 (en) Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product
WO2024060924A1 (zh) 虚拟场景的互动处理方法、装置、电子设备及存储介质
WO2024021792A1 (zh) 虚拟场景的信息处理方法、装置、设备、存储介质及程序产品
WO2024037139A1 (zh) 虚拟场景中的信息提示方法、装置、电子设备、存储介质及程序产品
WO2024060888A1 (zh) 虚拟场景的交互处理方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2023226569A9 (zh) 虚拟场景中的消息处理方法、装置、电子设备及计算机可读存储介质及计算机程序产品
CN114247132B (zh) 虚拟对象的控制处理方法、装置、设备、介质及程序产品
WO2023168990A1 (zh) 虚拟场景中的表演录制方法、装置、设备、存储介质及程序产品
WO2024051398A1 (zh) 虚拟场景的互动处理方法、装置、电子设备及存储介质
WO2024012016A1 (zh) 虚拟场景的信息显示方法、装置、电子设备、存储介质及计算机程序产品
CN117764758A (zh) 用于虚拟场景的群组建立方法、装置、设备及存储介质
CN115811623A (zh) 基于虚拟形象的直播方法和系统

Legal Events

  • 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21913371; Country of ref document: EP; Kind code of ref document: A1)
  • ENP: Entry into the national phase (Ref document number: 20227031607; Country of ref document: KR; Kind code of ref document: A)
  • ENP: Entry into the national phase (Ref document number: 2022556518; Country of ref document: JP; Kind code of ref document: A)
  • NENP: Non-entry into the national phase (Ref country code: DE)
  • 32PN: EP: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2023))