CN116954351A - Control processing method, device, electronic equipment, storage medium and program product



Publication number
CN116954351A
Authority
CN
China
Prior art keywords: control, target, displaying, interaction interface, virtual scene
Prior art date
Legal status: Pending
Application number
CN202210403731.5A
Other languages
Chinese (zh)
Inventor
崔维健
田聪
谢洁琪
刘博艺
邹聃成
邓昱
于清波
黎智
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210403731.5A
Publication of CN116954351A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a control processing method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: displaying, in a human-computer interaction interface, at least one control applied to a virtual scene, where different controls are associated with different functions in the virtual scene; displaying a corresponding selected mark for a selected target control among the at least one control; and, in response to an operation on the human-computer interaction interface, displaying a new control for replacing the target control at the contact position of the operation with the human-computer interaction interface, where the response area of the new control matches the contact area of the operation in the human-computer interaction interface, and the function associated with the new control is the same as the function of the target control it replaces. By means of the method and the apparatus, the human-computer interaction efficiency of control configuration can be improved.

Description

Control processing method, device, electronic equipment, storage medium and program product
Technical Field
The present application relates to man-machine interaction technology, and in particular, to a control processing method and apparatus for a virtual scene, an electronic device, a computer readable storage medium, and a computer program product.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and acquiring information. In particular, multimedia technology for virtual scenes, with the help of human-computer interaction engine technology, can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has various typical application scenarios; for example, in virtual scenes such as games, the real combat process between virtual objects can be simulated.
In order to meet the usage requirements of different users, the layout positions and response areas of controls need to be configured through custom operations. In the related art, a player can adjust the response area of a control through a quantitative configuration operation, or adjust the layout position of a control by moving it up and down. However, in the related art, a player usually needs to enter the virtual scene to experience the triggering feel of the configured control, which increases the difficulty of control configuration and reduces human-computer interaction efficiency.
Disclosure of Invention
The embodiments of the present application provide a control processing method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can improve the human-computer interaction efficiency of control configuration.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a control processing method of a virtual scene, which comprises the following steps:
displaying at least one control applied to a virtual scene in a human-computer interaction interface, wherein different controls are associated with different functions in the virtual scene;
displaying a corresponding selected mark for a selected target control among the at least one control;
in response to an operation on the human-computer interaction interface, displaying a new control for replacing the target control at the contact position of the operation with the human-computer interaction interface;
wherein the response area of the new control matches the contact area of the operation in the human-computer interaction interface, and the function associated with the new control is the same as the function of the target control it replaces.
The embodiment of the application provides a control processing device of a virtual scene, which comprises the following components:
the display module is used for displaying at least one control applied to the virtual scene in the human-computer interaction interface, wherein different controls are associated with different functions in the virtual scene;
the selection module is used for displaying a corresponding selected mark for a selected target control among the at least one control;
the replacing module is used for, in response to an operation on the human-computer interaction interface, displaying a new control for replacing the target control at the contact position of the operation with the human-computer interaction interface, wherein the response area of the new control matches the contact area of the operation in the human-computer interaction interface, and the function associated with the new control is the same as the function of the target control it replaces.
In the above solution, the display module is further configured to: displaying a control configuration interface in the man-machine interaction interface; displaying the at least one control in the control configuration interface; the control configuration interface is used for displaying before the virtual scene is operated or displaying when the virtual scene is in a pause state.
In the above solution, the display module is further configured to: displaying the virtual scene in the man-machine interaction interface; displaying at least one control applied to the virtual scene in a picture of the virtual scene; wherein the virtual scene is in an operating state or a pause state.
In the above solution, the display module is further configured to: displaying a first control applied to the virtual scene, wherein the first control is a control which is not adjusted in a first running time of the virtual scene; or displaying a second control applied to the virtual scene, wherein the second control is a control adjusted in a second running time of the virtual scene.
In the above solution, the display module is further configured to: acquiring configuration data of a current account number logged in the virtual scene for each control, wherein the configuration data comprises configuration frequency of the control and configuration parameters of each configuration; the following processing is performed by the first neural network model: extracting configuration features from the configuration data, and mapping the configuration features to first probabilities of each control; and performing descending order sorting processing on all the controls based on the first probability, and displaying a plurality of controls with top sorting in the man-machine interaction interface.
In the above solution, the selecting module is further configured to: responsive to a selection operation for the at least one control, a corresponding selected marker is displayed for at least one target control selected from the at least one control.
In the above solution, when the number of the at least one control is more than one, the selecting module is further configured to: perform, for the plurality of controls, sorting processing based on a specific dimension, take the top-ranked controls as the selected target controls, and display the corresponding selected marks; wherein the specific dimension includes: last use time, use frequency, configuration frequency, response sensitivity, and response score rate, where the response sensitivity is used for representing the probability that the control is successfully triggered, and the response score rate is used for representing the probability that a score is obtained after the control is successfully triggered.
In the above solution, the selecting module is further configured to: acquiring historical operation data and a historical interaction result aiming at the at least one control in the virtual scene; the following processing is performed by the second neural network model: extracting operation characteristics from the historical operation data, and extracting result characteristics from the historical interaction results; performing fusion processing on the operation characteristics and the result characteristics to obtain first fusion characteristics; mapping the first fusion characteristic to a second probability that each control does not conform to the operation habit of the account; and performing descending order sorting processing on the plurality of controls based on the second probability, taking the plurality of controls with the top order as selected target controls, and displaying corresponding selected marks.
In the above scheme, the man-machine interaction interface comprises an intelligent recognition control, wherein the intelligent recognition control is used for switching between an open state and a closed state when triggered, and the open state represents an open intelligent recognition mode; before displaying a new control for replacing the target control at a contact position of the operation and the man-machine interaction interface in response to the operation for the man-machine interaction interface, the replacing module is further configured to: in response to a triggering operation for the intelligent recognition control, displaying that the intelligent recognition control is in the open state, performing at least one of the following: displaying prompt information, wherein the prompt information is used for prompting implementation of the operation; displaying a selected mark of the intelligent recognition control; and displaying that at least one of the position control and the style control in the man-machine interaction interface is in a non-triggerable state.
In the above solution, after the contact position between the operation and the human-computer interaction interface displays a new control for replacing the target control, the replacing module is further configured to: hiding the intelligent identification control; displaying a return control and a completion control; storing the new control in response to a triggering operation for the completion control; and responding to the triggering operation for the return control, hiding the new control and restoring and displaying the target control.
In the above scheme, the man-machine interaction interface comprises an intelligent recognition control, the intelligent recognition control is used for switching between an open state and a closed state when triggered, and the closed state represents a closed intelligent recognition mode; after displaying the corresponding selected mark, the replacing module is further configured to: responding to triggering operation for the intelligent recognition control, and displaying that the intelligent recognition control is in the closed state; moving the target control in the human-computer interaction interface in response to a position movement operation for a position control included in the human-computer interaction interface; and responding to a style adjustment operation for the style control included in the man-machine interaction interface, and adjusting the display style of the target control based on the style adjustment operation.
In the above scheme, the number of target controls is more than one and the target controls are selected in batches; the operation is a batch of touch operations on the human-computer interaction interface, and the number of touch operations is the same as the number of target controls; the replacing module is further configured to: in response to each touch operation on the human-computer interaction interface, perform the following processing: displaying a plurality of selection controls, where the plurality of selection controls correspond one-to-one to the plurality of displayed target controls; and in response to a triggering operation for any one of the selection controls, displaying, at the contact position, the new control for replacing the target control to be replaced, where the target control to be replaced is the target control corresponding to the currently triggered selection control.
In the above scheme, the number of target controls is more than one and the target controls are selected in batches in sequence; the operation is a batch of touch operations on the human-computer interaction interface, and the number of touch operations is the same as the number of target controls; the replacing module is further configured to: in response to each touch operation on the human-computer interaction interface, perform the following processing: displaying the new control for replacing the target control to be replaced at the contact position of the touch operation with the human-computer interaction interface; wherein the target control to be replaced is the first target control in the sequence that has not yet been replaced.
In the above scheme, the number of target controls is more than one and the target controls are selected in batches; the operation is a multi-point touch operation on the human-computer interaction interface, and the number of contacts of the multi-point touch operation is the same as the number of target controls; the replacing module is further configured to: display a plurality of new controls at a plurality of contact positions corresponding one-to-one to the contacts of the multi-point touch operation, where the new control at each contact position is used for replacing the target control corresponding to that contact position.
In the above solution, the replacing module is further configured to: for each target control, perform the following: displaying the new control for replacing the target control at the contact position, among the plurality of contact positions, that is closest to the original position; wherein the original position is the display position of the target control before responding to the operation.
In the above solution, before the displaying the plurality of new controls, the replacing module is further configured to: acquiring functional data of a plurality of target controls and position data of a plurality of contact positions; performing the following processing for each target control through a third neural network model: extracting functional features from the functional data and extracting position features from the position data; performing fusion processing on the functional features and the position features to obtain second fusion features; mapping the second fusion feature to a third probability that a new control corresponding to the target control is at each contact position; and carrying out linear programming processing based on the third probability that the new control of each target control is positioned at each contact position, so as to obtain the contact position of the new control for replacing each target control.
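As an illustrative sketch of the linear-programming step described above (the third probabilities are assumed to have already been produced by the third neural network model; all function and variable names are assumptions, not part of the disclosure), the assignment of new controls to contact positions can be expressed as a classic assignment problem:

```python
# Hedged sketch: assign the new control of each target control to exactly one
# contact position so that the sum of third probabilities is maximized.
# Names and shapes are illustrative assumptions only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_new_controls(third_probabilities: np.ndarray) -> dict:
    """third_probabilities[i][j] = probability that the new control replacing
    target control i should be placed at contact position j."""
    cost = -third_probabilities                      # maximize by minimizing the negation
    control_idx, contact_idx = linear_sum_assignment(cost)
    return dict(zip(control_idx.tolist(), contact_idx.tolist()))

# Example: 3 target controls, 3 contact positions.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.3, 0.5]])
print(assign_new_controls(probs))                    # {0: 0, 1: 1, 2: 2}
```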
In the above scheme, the opacity of the new control is positively correlated with an operation parameter of the operation corresponding to the new control, wherein the operation parameter includes at least one of the following: the contact duration of the operation and the contact force of the operation.
In the above solution, the replacing module is further configured to: displaying a new control with the target opacity in a contact position of the operation and the man-machine interaction interface; or displaying a new control from zero opacity to target opacity in a contact position of the operation and the man-machine interaction interface.
In the above solution, in the process of displaying the new control from zero opacity to the target opacity, the replacing module is further configured to perform any one of the following: displaying a process in which the target control changes from its original opacity to zero opacity; displaying the target control at its original opacity until the new control reaches the target opacity, and then hiding the target control; or hiding the target control as soon as the opacity of the new control becomes non-zero.
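A minimal sketch of the opacity behaviour described in the two solutions above, assuming illustrative helper names and one possible positive correlation between opacity and the operation parameters:

```python
# Hedged sketch: the opacity of the new control grows with the contact duration
# and contact force of the operation (one possible positive correlation), and
# the replaced target control can be handled in one of the three ways above.
def new_control_opacity(contact_time_s: float, contact_force: float,
                        target_opacity: float = 1.0) -> float:
    # Weighted, clamped combination of duration and (normalized) force.
    raw = 0.6 * min(contact_time_s / 0.5, 1.0) + 0.4 * min(contact_force, 1.0)
    return min(raw, 1.0) * target_opacity

def old_control_opacity(new_opacity: float, original_opacity: float,
                        mode: str = "cross_fade") -> float:
    if mode == "cross_fade":       # fade from original opacity to zero
        return original_opacity * (1.0 - new_opacity)
    if mode == "hold_then_hide":   # keep until the new control reaches target opacity
        return 0.0 if new_opacity >= 1.0 else original_opacity
    # "hide_immediately": hide as soon as the new control's opacity is non-zero
    return original_opacity if new_opacity == 0.0 else 0.0
```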
In the above solution, before the new control for replacing the target control is displayed at the contact position of the operation with the human-computer interaction interface, the replacing module is further configured to: generate an ellipse by taking the longest axis distance of the contact area between the operation and the human-computer interaction interface as the major radius and the shortest axis distance of the contact area as the minor radius; and determine the ellipse as the response area of the new control.
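The elliptical response area can be sketched as follows; this assumes the contact area is reported as a set of touch sample points, and the helper names are illustrative only:

```python
# Hedged sketch: build an elliptical response area whose semi-axes come from
# the extents of the contact area (the larger extent is the major radius, the
# smaller the minor radius), then hit-test later touches against it.
from dataclasses import dataclass

@dataclass
class EllipseArea:
    center: tuple          # (x, y)
    semi_x: float          # half-extent of the contact area along x
    semi_y: float          # half-extent of the contact area along y

def response_area_from_contact(points) -> EllipseArea:
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    center = ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)
    semi_x = max((max(xs) - min(xs)) / 2, 1.0)   # avoid a degenerate zero radius
    semi_y = max((max(ys) - min(ys)) / 2, 1.0)
    return EllipseArea(center, semi_x, semi_y)   # major/minor radius = max/min of the two

def hit_test(area: EllipseArea, x: float, y: float) -> bool:
    dx, dy = x - area.center[0], y - area.center[1]
    return (dx / area.semi_x) ** 2 + (dy / area.semi_y) ** 2 <= 1.0
```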
In the above scheme, the target control is associated with a plurality of functions, and different functions correspond to different trigger pressures; the replacing module is further configured to: in response to the operation on the human-computer interaction interface, display the plurality of functions to be matched with the pressure of the operation; and in response to a selection operation for a target function among the plurality of functions, identify the pressure of the operation as the trigger pressure of the target function.
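The pressure-binding behaviour can be illustrated with a small sketch; the data structures and the tolerance value are assumptions for illustration only:

```python
# Hedged sketch: record the pressure of the configuration operation as the
# trigger pressure of the function the player selects, then resolve later
# presses to the function with the closest bound pressure.
from typing import Optional

trigger_pressure = {}   # function name -> bound trigger pressure

def bind_pressure(selected_function: str, operation_pressure: float) -> None:
    trigger_pressure[selected_function] = operation_pressure

def resolve_function(current_pressure: float, tolerance: float = 0.1) -> Optional[str]:
    if not trigger_pressure:
        return None
    name, bound = min(trigger_pressure.items(),
                      key=lambda kv: abs(kv[1] - current_pressure))
    return name if abs(bound - current_pressure) <= tolerance else None

bind_pressure("shoot", 0.8)    # e.g. a heavy press is bound to the shooting function
bind_pressure("aim", 0.3)      # a light press is bound to the aiming function
print(resolve_function(0.75))  # "shoot"
```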
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control processing method of the virtual scene provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for realizing the control processing method of the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or an instruction, wherein the computer program or the instruction realizes the control processing method of the virtual scene provided by the embodiment of the application when being executed by a processor.
According to the embodiments of the present application, for the selected target control, a new control for replacing the target control is displayed, in response to an operation on the human-computer interaction interface, at the contact position of the operation with the human-computer interaction interface, and the response area of the new control matches the contact area of the operation in the human-computer interaction interface. This is equivalent to identifying the response area and layout position of the new control directly from the operation, which improves the configuration efficiency for the control; and because the contact area is directly generated by the operation, a response area matched with the contact area can effectively respond to subsequent triggering operations on the control, which improves the configuration accuracy for the control.
Drawings
FIG. 1 is an interface schematic diagram of a control processing method of a virtual scene in the related art;
FIG. 2 is an application mode schematic diagram of a control processing method of a virtual scene according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device applying a control processing method of a virtual scene according to an embodiment of the present application;
FIGS. 4A to 4C are schematic flow diagrams of a control processing method of a virtual scene according to an embodiment of the present application;
FIGS. 5A to 5B are interface schematic diagrams of a control processing method of a virtual scene according to an embodiment of the present application;
FIG. 6 is an interface schematic diagram of a control processing method of a virtual scene according to an embodiment of the present application;
FIG. 7 is an interface schematic diagram of a control processing method of a virtual scene according to an embodiment of the present application;
FIGS. 8A to 8E are interface schematic diagrams of a control processing method of a virtual scene according to an embodiment of the present application;
FIG. 9 is a new-control generation schematic diagram of a control processing method of a virtual scene according to an embodiment of the present application;
FIGS. 10A to 10B are interface schematic diagrams of a control processing method of a virtual scene according to an embodiment of the present application;
FIG. 11 is a flowchart of a control processing method of a virtual scene according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are intended to be within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely distinguishing between similar objects and not representing a particular ordering of objects, it being understood that the "first", "second", "third" may be interchanged with a particular order or sequence, where permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing the embodiments only and is not intended to be limiting of the application.
Before describing the embodiments of the present application in further detail, the terms involved in the embodiments of the present application are explained as follows.
1) Virtual scene: a scene, different from the real world, that is output by a device. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example a two-dimensional image output by a display screen, or a three-dimensional image output by three-dimensional display technologies such as stereoscopic projection, virtual reality and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory and motion perception, can also be formed through various possible hardware.
2) In response to: used to indicate the condition or state on which a performed operation depends. When the condition or state on which it depends is satisfied, the one or more operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of the multiple operations performed.
3) Client: an application program running in the terminal for providing various services, such as a game client.
4) Cloud storage: a new concept extended and developed from the concept of cloud computing; it refers to a system that, through functions such as cluster application, grid technology or a distributed file system, integrates a large number of storage devices of different types in a network through application software to work cooperatively and jointly provide data storage and service access functions to the outside. When the core of a cloud computing system's operation and processing is the storage and management of a large amount of data, the cloud computing system needs to be configured with a large number of storage devices and then turns into a cloud storage system; therefore, cloud storage is a cloud computing system with data storage and management as its core.
Referring to FIG. 1, FIG. 1 is an interface schematic diagram of a control processing method of a virtual scene in the related art. A custom key layout setting interface is displayed in the human-computer interaction interface 301; a control transparency adjustment progress bar 303, a control size adjustment progress bar 304, and a position adjustment control 305 are displayed in the top operation field 302 of the human-computer interaction interface 301. The size of the selected control 306 can be adjusted in response to an adjustment operation on the control size adjustment progress bar 304, the transparency of the selected control 306 can be adjusted in response to an adjustment operation on the control transparency adjustment progress bar 303, and the position of the selected control 306 can be adjusted in response to an adjustment operation on the position adjustment control 305.
In the custom key layout setting interface of the related art, the size of a control has to be adjusted by dragging an adjustment progress bar, so the control cannot be adjusted to a given size accurately and quickly; the layout position of a control is adjusted by dragging a button icon, and although fine adjustment in the four directions of up, down, left and right is supported, frequent clicking is required, so adjustment takes a long time.
The embodiments of the present application provide a control processing method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product: by recognizing the contact area between an operation and the interface, a control matching the contact area can be generated directly at the contact position, which effectively improves the efficiency of adjusting the layout position and response area of the control; and because the response area of the new control fits the actual operation better, the response accuracy of the control can be improved. Exemplary applications of the electronic device provided by the embodiments of the present application are described below; the electronic device provided by the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device).
In order to facilitate easier understanding of the control processing method for a virtual scene provided by the embodiment of the present application, an exemplary implementation scenario of the control processing method for a virtual scene provided by the embodiment of the present application is first described, where the virtual scene may be output based on a terminal completely or based on cooperation of the terminal and a server.
In some embodiments, the virtual scene may be an environment for interaction of game characters, for example, the game characters may fight in the virtual scene, and both parties may interact in the virtual scene by controlling actions of the virtual objects, so that the user can relax life pressure in the game process.
In another implementation scenario, referring to fig. 2, fig. 2 is a schematic application mode diagram of a control processing method of a virtual scenario, which is applied to a terminal 400 and a server 200, and is generally suitable for an application mode that depends on a computing capability of the server 200 to complete virtual scenario calculation and output a virtual scenario at the terminal 400.
As an example, an account logs in to a client (for example a web-version game application) running on the terminal 400. In response to a selection operation of the account for a control, the client displays the selected target control; in response to an operation of the account on the human-computer interaction interface, the client sends the contact position and contact area of the operation to the server 200 through the network 300; the server 200 calculates display data of the new control based on the contact position and the contact area and sends the display data to the client; and the new control is displayed at the contact position, based on the display data, in the human-computer interaction interface of the client, with the response area of the new control matching the contact area.
As an example, an account logs in to a client (for example a web-version game application) running on the terminal 400. In response to a selection operation of the account for a control, the client displays the selected target control; in response to an operation of the account on the human-computer interaction interface, the client calculates display data of the new control based on the contact position and contact area of the operation; and the new control is displayed at the contact position, based on the display data, in the human-computer interaction interface of the client, with the response area of the new control matching the contact area.
In some embodiments, the terminal 400 may implement the control processing method of the virtual scene provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; it may be a native application (APP), that is, a program that needs to be installed in the operating system to run, such as a game APP (i.e., the client described above) or a live-streaming APP; it may also be an applet, that is, a program that only needs to be downloaded into a browser environment to run; and it may also be a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module or plug-in.
The embodiment of the application can be realized by means of Cloud Technology (Cloud Technology), wherein the Cloud Technology refers to a hosting Technology for integrating serial resources such as hardware, software, network and the like in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal 400 and the server 200 may be directly or indirectly connected through a wired or wireless communication manner, which is not limited in the embodiment of the present application.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an electronic device applying the control processing method of a virtual scene provided by an embodiment of the present application. Taking the electronic device being a terminal as an example, the terminal 400 shown in FIG. 3 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connection and communication between these components. In addition to the data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are all labeled in FIG. 3 as the bus system 440.
The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices that are physically located remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The non-volatile memory may be read only memory (ROM, Read Only Memory) and the volatile memory may be random access memory (RAM, Random Access Memory). The memory 450 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
network communication module 452 for accessing other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the control processing device for a virtual scene provided by the embodiments of the present application may be implemented in software. FIG. 3 shows a control processing device 455 for a virtual scene stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a display module 4551, a selection module 4552 and a replacement module 4553. These modules are logical, so they may be combined arbitrarily or further split according to the functions implemented; the functions of the respective modules will be described below.
The control processing method of the virtual scene provided by the embodiment of the application will be described in combination with the exemplary application and implementation of the terminal provided by the embodiment of the application.
Referring to fig. 4A, fig. 4A is a flowchart of a control processing method of a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4A.
In step 101, at least one control applied in a virtual scene is displayed in a human-machine interaction interface.
As an example, the controls applied to the virtual scene may be one or more, different controls being associated with different functions in the virtual scene, e.g., a shooting control that performs a shooting action upon being triggered, a squatting control that performs a squatting action upon being triggered.
In step 102, for a selected target control of the at least one control, a corresponding selected mark is displayed.
In step 103, in response to the operation for the human-computer interaction interface, a new control for replacing the target control is displayed at a contact position of the operation with the human-computer interaction interface.
As an example, the operation on the human-computer interaction interface is a touch operation, for example a single touch operation, multiple touch operations, or a multi-point touch operation, and the object performing the touch operation may be a finger, a toe, or a stylus. The response area of the new control matches the contact area of the operation in the human-computer interaction interface, and the function associated with the new control is the same as the function of the target control replaced by the new control. For example, the response area of the new control is the same as the contact area in the human-computer interaction interface, or the response area of the new control is larger than the contact area; that is, the response area of the new control at least partially overlaps the contact area in the human-computer interaction interface.
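A minimal sketch of this step, assuming a simplified circular contact/response area and illustrative class names (the disclosure also covers the elliptical response area described later):

```python
# Hedged sketch of step 103: create a new control at the contact position that
# carries the same function as the target control, with a response area that
# matches (here: at least covers) the contact area of the operation.
from dataclasses import dataclass

@dataclass
class Control:
    function: str              # e.g. "shoot", "squat"
    position: tuple            # layout position in the interface, (x, y)
    response_radius: float     # simplified circular response area

def replace_with_new_control(target: Control,
                             contact_center: tuple,
                             contact_radius: float,
                             margin: float = 1.1) -> Control:
    # margin > 1 makes the response area slightly larger than the contact area,
    # so it at least partially (in fact fully) overlaps the contact area.
    return Control(function=target.function,
                   position=contact_center,
                   response_radius=contact_radius * margin)
```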
In some embodiments, displaying at least one control applied to the virtual scene in the human-computer interaction interface in step 101 may be achieved by the following technical scheme: displaying a control configuration interface in the man-machine interaction interface; displaying at least one control applied to the virtual scene in a control configuration interface; the control configuration interface is used for displaying before running the virtual scene or displaying when the virtual scene is in a pause state.
As an example, referring to FIG. 5A, FIG. 5A is an interface schematic diagram of a control processing method of a virtual scene provided by an embodiment of the present application. A custom key layout setting interface, that is, a control configuration interface, is displayed in the human-computer interaction interface 501A, and a plurality of controls 502A related to the functions in the virtual scene are displayed in the control configuration interface, for example combat-related buttons, including a left and right shooting button, a mirror opening button, a squat button, a bullet changing button, a left and right probe button, and so on. The control configuration interface is displayed in response to a trigger operation for a configuration interface entry; the trigger operation for the configuration interface entry can occur before the virtual scene runs or while the virtual scene is running, and if it occurs while the virtual scene is running, the virtual scene is put into a pause state in response to the trigger operation for the configuration interface entry. In the embodiments of the present application, controls can be configured in a manner independent of the running of the virtual scene, which avoids conflicts between control configuration and control use and improves the human-computer interaction efficiency of subsequent control configuration.
In some embodiments, displaying at least one control applied to the virtual scene in the human-computer interaction interface in step 101 may be achieved by the following technical scheme: displaying a virtual scene in a human-computer interaction interface; displaying at least one control applied to the virtual scene in a picture of the virtual scene; wherein the virtual scene is in a running state (e.g., network game) or a paused state (e.g., stand-alone game).
As an example, referring to fig. 5B, fig. 5B is an interface schematic diagram of a control processing method of a virtual scene provided by an embodiment of the present application, in which the virtual scene is displayed in a human-computer interaction interface 501B, and at least one control 502B applied to the virtual scene is displayed in a screen of the virtual scene, for example, a combat related button, including a left and right shooting button, a mirror opening button, a squat button, a bullet changing button, a left and right probe button, and so on. When the virtual scene is the virtual scene of the single-machine game, the virtual scene can display the editable control in the running state or the pause state, and when the virtual scene is the virtual scene of the multi-person game, the virtual scene can display the editable control in the pause state. The embodiment of the application can configure the control in the running process of the virtual scene, so that the real-time control configuration can be randomly carried out according to the game experience, the operation of additionally opening the control configuration interface is avoided, and the man-machine interaction efficiency of the overall configuration control can be improved.
In some embodiments, displaying at least one control applied to the virtual scene in the human-computer interaction interface in step 101 may be achieved by the following technical scheme: displaying a first control applied to the virtual scene, wherein the first control is a control which is not adjusted in a first running time of the virtual scene; or displaying a second control applied to the virtual scene, wherein the second control is a control adjusted within a second runtime length of the virtual scene.
As an example, the first running duration is the length of the period from a first moment to the current moment, and the second running duration is the length of the period from a second moment to the current moment, the first moment being earlier than the second moment. For example, the first moment is the same time 10 days ago and the second moment is the same time 1 day ago, so the controls that have not been configured in the last 10 days are displayed, or the controls that have been configured in the last 1 day are displayed. Displaying only the controls that have not been configured for a long time, or only the controls that have just been configured recently, saves the user from having to select among a large number of controls, which can effectively improve the configuration efficiency of controls; and displaying only a small number of controls can effectively improve the utilization efficiency of display resources.
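As a small illustrative sketch of these two display filters (assuming each control records the timestamp of its last configuration; names and window lengths are examples only):

```python
# Hedged sketch: pick either the controls not adjusted within the first running
# duration (e.g. the last 10 days) or the controls adjusted within the second
# running duration (e.g. the last day).
from datetime import datetime, timedelta

def first_controls(controls, first_window=timedelta(days=10), now=None):
    now = now or datetime.now()
    return [c for c in controls if c["last_configured"] < now - first_window]

def second_controls(controls, second_window=timedelta(days=1), now=None):
    now = now or datetime.now()
    return [c for c in controls if c["last_configured"] >= now - second_window]
```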
In some embodiments, a first control which is not adjusted in a first operation time length of the virtual scene and a second control which is adjusted in a second operation time length of the virtual scene can be simultaneously displayed, but different display styles are applied to the display of the first control and the second control, so that the first control and the second control are distinguished, the efficiency of selecting target controls subsequently is effectively improved, and the configuration efficiency of the controls is improved.
In some embodiments, displaying at least one control applied to the virtual scene in the human-computer interaction interface in step 101 may be achieved by the following technical scheme: acquiring configuration data of a current account number logging in a virtual scene for each control, wherein the configuration data comprises configuration frequency of the control and configuration parameters of each configuration; the following processing is performed by the first neural network model: extracting configuration features from the configuration data, and mapping the configuration features into first probabilities of each control; and performing descending order sorting processing based on the first probability on all the controls, and displaying at least one control with the front sorting in the man-machine interaction interface. The intelligent degree and accuracy of displaying at least one control can be improved through the neural network model, the efficiency of selecting target controls subsequently is effectively improved, the configuration efficiency of the controls is improved, and the utilization efficiency of display resources can be effectively improved through a mode of displaying at least one control with the front sequence only.
As an example, the configuration frequency is the number of configurations per unit time, and the configuration parameters of each configuration are the configuration parameters of the response area and the configuration parameters of the layout position. During training, sample configuration features of each sample control, such as sample configuration parameter features and sample configuration frequency features, are collected in a sample virtual scene, and training samples are constructed from the collected sample configuration features. The training samples are used as the input of the first neural network model to be trained, and whether the sample control is displayed in the sample virtual scene is used as the labeling data: when the sample control is displayed in the sample virtual scene, the label of the sample control is 1, and when the sample control is not displayed in the sample virtual scene, the label of the sample control is 0. The first neural network model is trained based on the training samples and the labeling data, so that afterwards whether a certain control needs to be displayed can be determined directly through the first neural network model.
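A hedged sketch of how such a "first neural network model" could look; the architecture, feature layout and framework are assumptions and not the disclosed model:

```python
# Hedged sketch: map per-control configuration features (configuration frequency
# plus recent configuration parameters) to a first probability, then display the
# top-ranked controls. Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FirstModel(nn.Module):
    def __init__(self, feature_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),        # first probability per control
        )

    def forward(self, config_features: torch.Tensor) -> torch.Tensor:
        return self.net(config_features).squeeze(-1)

def top_controls(model: FirstModel, features: torch.Tensor,
                 control_ids: list, k: int = 5) -> list:
    with torch.no_grad():
        probs = model(features)                    # shape: [num_controls]
    order = torch.argsort(probs, descending=True)  # descending sort by first probability
    return [control_ids[i] for i in order[:k].tolist()]
```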
In some embodiments, referring to FIG. 4B, FIG. 4B is a schematic flow diagram of a control processing method of a virtual scene according to an embodiment of the present application. In step 102, displaying the corresponding selected mark for the selected target control among the at least one control may be implemented through step 1021 in FIG. 4B.
In step 1021, responsive to a selection operation for at least one control, a corresponding selected marker is displayed for at least one target control selected from the at least one control.
As an example, referring to FIG. 6, FIG. 6 is an interface schematic diagram of a control processing method of a virtual scene provided by an embodiment of the present application. A custom key layout setting interface is displayed in the human-computer interaction interface 601, and a plurality of controls 602 related to the functions in the virtual scene are displayed in the human-computer interaction interface 601. In response to a clicking operation of the account on any control, the selected control 602 is the target control, and the highlighted outer frame of the selected control 602 is the selected mark. Selecting the target control through a selection operation ensures that the target control is the control the user actually wants to configure, thereby improving the accuracy of control configuration.
In some embodiments, when the number of the at least one control is more than one, displaying the corresponding selected mark for the selected target control among the at least one control in step 102 may be implemented by the following technical scheme: performing, for the plurality of controls, sorting processing based on a specific dimension, taking the top-ranked controls as the selected target controls, and displaying the corresponding selected marks; wherein the specific dimension includes: last use time, use frequency, configuration frequency, response sensitivity, and response score rate, where the response sensitivity is used for representing the probability that the control is successfully triggered, and the response score rate is used for representing the probability that a score is obtained after the control is successfully triggered. The automatic selection process can improve the user's operation efficiency, thereby improving human-computer interaction efficiency.
As an example, multiple target controls can be determined in an intelligent manner: sorting processing in a specific dimension is performed on all the controls, and different sorting manners are adopted for different specific dimensions. For example, when the specific dimension is the last use time, the use frequency or the configuration frequency, the sorting manner is descending order; when the specific dimension is the response sensitivity or the response score rate, the sorting manner is ascending order; and at least one top-ranked control is taken as the target control. Taking the response sensitivity as an example: it can be determined from the historical operation data of control A that for every 100 touches of control A, the corresponding function is triggered 50 times, so the response sensitivity is 50%; it can be determined from the historical operation data of control B that for every 100 touches of control B, the corresponding function is triggered 70 times, so the response sensitivity is 70%. Ascending sorting is performed based on the response sensitivity, so the top-ranked control A is taken as the target control, and the selected mark corresponding to control A is displayed. Taking the response score rate as an example: it can be determined from the historical operation data of control A that for every 100 successful triggers of control A, a score is obtained 50 times, so the response score rate is 50%; it can be determined from the historical operation data of control B that for every 100 successful triggers of control B, a score is obtained 70 times, so the response score rate is 70%. Ascending sorting is performed based on the response score rate, and the top-1 control is taken as the target control, that is, control A is taken as the target control, and the selected mark corresponding to control A is displayed, which represents that control A is in the selected state.
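The worked example above can be reduced to a few lines; the counter names are assumptions about how the historical operation data might be aggregated:

```python
# Hedged sketch of the response-sensitivity ranking in the example above:
# control A (50/100 = 50%) sorts before control B (70/100 = 70%) in ascending
# order, so control A is selected as the target control. The response score
# rate would be sorted the same (ascending) way.
def response_sensitivity(successful_triggers: int, total_touches: int) -> float:
    return successful_triggers / total_touches

def response_score_rate(scoring_triggers: int, successful_triggers: int) -> float:
    return scoring_triggers / successful_triggers

history = {
    "control_A": {"touches": 100, "successes": 50},   # sensitivity 50%
    "control_B": {"touches": 100, "successes": 70},   # sensitivity 70%
}

ranked = sorted(history, key=lambda c: response_sensitivity(
    history[c]["successes"], history[c]["touches"]))
print(ranked[:1])   # ['control_A'] -> displayed with its selected mark
```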
In some embodiments, in step 102, for the selected target control in the at least one control, displaying the corresponding selected mark may be implemented by the following technical scheme: acquiring historical operation data and a historical interaction result aiming at least one control in a virtual scene; the following processing is performed by the second neural network model: extracting operation characteristics from the historical operation data, and extracting result characteristics from the historical interaction results; performing fusion processing on the operation characteristics and the result characteristics to obtain first fusion characteristics; mapping the first fusion characteristic into a second probability that each control does not accord with the operation habit of the account; and performing descending order sorting processing based on the second probability on the plurality of controls, taking the plurality of controls with the top sorting as selected target controls, and displaying corresponding selected marks. The intelligent degree and accuracy of the selected target control can be improved through the neural network model, and the configuration efficiency of the control is effectively improved.
As an example, the historical operation data is the parameter data of each operation of the control, including the operation mode, such as pressing or clicking, and the operation environment, such as operating while defending, operating while attacking, or operating while moving. The historical interaction result is the result produced after each operation of the control, such as a shooting score or a successful squat evasion. In the training process, sample historical operation data and sample historical interaction results of each sample control are collected in a sample virtual scene, sample operation features are extracted from the sample historical operation data, sample result features are extracted from the sample historical interaction results, and a training sample is constructed from the collected sample operation features and sample result features. The training sample is used as the input of the second neural network model to be trained, and whether the sample control is selected in the sample virtual scene is used as the labeling data: when the sample control is selected in the sample virtual scene, the label of the sample control is 1; when the sample control is not selected in the sample virtual scene, the label of the sample control is 0. The second neural network model is trained based on the training samples and the labeling data, so that whether a control should be selected can subsequently be determined directly through the second neural network model.
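A minimal sketch, assuming PyTorch, of what such a "second neural network model" could look like; the layer sizes, feature dimensions and training step are illustrative assumptions, not the patented implementation.

```python
# Sketch: operation features and result features are extracted, fused, and mapped
# to a probability that a control does not match the account's operating habits.
import torch
import torch.nn as nn

class ControlSelectionModel(nn.Module):
    def __init__(self, op_dim=16, res_dim=8, hidden=32):
        super().__init__()
        self.op_encoder = nn.Linear(op_dim, hidden)    # operation-feature extraction
        self.res_encoder = nn.Linear(res_dim, hidden)  # result-feature extraction
        self.head = nn.Linear(hidden * 2, 1)           # fused features -> probability

    def forward(self, op_data, res_data):
        op_feat = torch.relu(self.op_encoder(op_data))
        res_feat = torch.relu(self.res_encoder(res_data))
        fused = torch.cat([op_feat, res_feat], dim=-1)      # first fusion feature
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # second probability

model = ControlSelectionModel()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on dummy samples: label 1 means the sample control was
# selected in the sample virtual scene, label 0 means it was not.
ops, results = torch.randn(4, 16), torch.randn(4, 8)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
optimizer.zero_grad()
loss = loss_fn(model(ops, results), labels)
loss.backward()
optimizer.step()
```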
In some embodiments, the human-computer interaction interface includes an intelligent recognition control, where the intelligent recognition control is configured to switch between an on state and an off state when triggered, and the on state characterizes that the intelligent recognition mode is on. Before the new control for replacing the target control is displayed at the contact position in response to the operation for the human-computer interaction interface, in response to a triggering operation for the intelligent recognition control, the intelligent recognition control is displayed in the on state, and at least one of the following processes is performed: displaying prompt information, wherein the prompt information is used for prompting the user to perform the operation; displaying a selected mark of the intelligent recognition control; and displaying that at least one of the position control and the style control in the man-machine interaction interface is in a non-triggerable state. The intelligent recognition control can guide the user to generate a new control by directly operating the man-machine interaction interface, thereby improving the man-machine interaction efficiency.
As an example, referring to fig. 7, fig. 7 is an interface schematic diagram of a control processing method for a virtual scene provided by the embodiment of the present application. A custom key layout setting interface is displayed in a human-computer interaction interface 701, a target control 702 is displayed in the human-computer interaction interface 701, and an intelligent recognition control 704 and a control size adjustment progress bar 706 are displayed in a top operation bar 703 of the human-computer interaction interface 701. In response to a clicking operation of the account on the intelligent recognition control 704, the intelligent recognition control 704 is highlighted, and the highlighted display style serves as the selected mark; a prompt message 705 is displayed in the lower area of the top operation bar for prompting the user to perform the operation, and at least one of the position control and the style control in the human-computer interaction interface is displayed in a non-triggerable state, for example, the control size adjustment progress bar and the position adjustment button are hidden.
In some embodiments, referring to fig. 4C, fig. 4C is a flowchart of a control processing method of a virtual scene provided by the embodiment of the present application; after the new control for replacing the target control is displayed at the contact position with the human-computer interaction interface in step 103, steps 104 to 107 in fig. 4C may be executed.
In step 104, the smart identification control is hidden.
In step 105, a return control is displayed along with a completion control.
In step 106, in response to the triggering operation for the completion control, a new control is stored.
In step 107, in response to a trigger operation for the return control, the new control is hidden and the display target control is restored.
As an example, referring to fig. 8A, fig. 8A is an interface schematic diagram of a control processing method for a virtual scene provided by the embodiment of the present application. A target control 802A is displayed in a human-computer interaction interface 801A, an intelligent recognition control 804A is displayed in a top operation bar 803A of the human-computer interaction interface 801A, and the intelligent recognition control 804A is highlighted. In response to a finger clicking a certain position in the human-computer interaction interface 801A, a new control 805A with an inclination angle is automatically generated according to the longest diameter and the shortest diameter of the contact area of the clicking operation, the intelligent recognition control is hidden, and a return control 806A and a completion control 807A are displayed. In response to a triggering operation for the return control 806A, the present operation can be canceled and the function of performing intelligent recognition on the operation to generate a new control is exited; in response to a triggering operation for the completion control 807A, the new control generated by the present operation is recorded, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically. The configuration flexibility of the control can be effectively improved through the man-machine interaction process of steps 104 to 107.
In some embodiments, the human-computer interaction interface includes an intelligent recognition control, where the intelligent recognition control is configured to switch between an on state and an off state when triggered, and the off state represents that the intelligent recognition mode is off; after the corresponding selected mark is displayed in step 102, in response to a triggering operation for the intelligent recognition control, the intelligent recognition control is displayed in the off state; in response to a position moving operation for the position control included in the man-machine interaction interface, the target control is moved in the man-machine interaction interface; and in response to a style adjustment operation for the style control included in the man-machine interaction interface, the display style of the target control is adjusted based on the style adjustment operation. According to the embodiment of the application, the user can choose to adjust the response area of the control through a quantitative configuration operation or to adjust the layout position of the control by moving it, that is, the user can flexibly select the control configuration mode, which improves the man-machine interaction efficiency.
As an example, referring to fig. 7, fig. 7 is an interface schematic diagram of a control processing method of a virtual scene provided by the embodiment of the present application. A custom key layout setting interface is displayed in a human-computer interaction interface 701, a target control 702 is displayed in the human-computer interaction interface 701, and an intelligent recognition control 704, a control size adjustment progress bar 706 and a position control are displayed in a top operation bar 703 of the human-computer interaction interface 701. In response to a clicking operation of the account on the intelligent recognition control 704, the intelligent recognition control 704 is displayed in a gray state, which represents that the intelligent recognition control is in the off state, so that the user can subsequently adjust the response area of the control through a quantitative configuration operation or adjust the layout position of the control by moving it. In response to a position moving operation for the position control included in the human-computer interaction interface, the target control is moved in the human-computer interaction interface; in response to a style adjustment operation for a style control included in the human-computer interaction interface, for example, a style adjustment operation for the control size adjustment progress bar 706, the display style of the target control is adjusted based on the style adjustment operation.
In some embodiments, the number of target controls is a plurality of and selected in batches, the operation is a batch of touch operations in the man-machine interaction interface, and the number of touch operations is the same as the number of target controls; step 103, in response to the operation for the man-machine interaction interface, displaying a new control for replacing the target control at the contact position between the operation and the man-machine interaction interface, which can be realized by the following technical scheme: in response to each touch operation for the man-machine interaction interface, performing the following processing: displaying a plurality of selection controls, wherein the plurality of selection controls are in one-to-one correspondence with a plurality of displayed target controls; and responding to triggering operation aiming at any one selection control, and displaying a new control for replacing the target control to be replaced at a contact position, wherein the target control to be replaced is the target control corresponding to the currently triggered selection control. The configuration efficiency of the controls can be improved by selecting target controls in batches and performing batch touch operation, so that the man-machine interaction efficiency is improved.
As an example, referring to fig. 8B, fig. 8B is an interface schematic diagram of a control processing method of a virtual scene provided by an embodiment of the present application, in which a right probe target control 802B and a squat target control 803B are displayed in a human-computer interaction interface 801B, a right probe selection control 804B and a squat selection control 805B are displayed in response to clicking a certain position in the human-computer interaction interface 801B by a finger, and a new control 806B for replacing the right probe control is displayed in a contact position in response to a triggering operation for the right probe selection control 804B.
As an example, referring to fig. 8C, fig. 8C is an interface schematic diagram of a control processing method for a virtual scene provided by the embodiment of the present application. A right probe target control 802C with a selected mark and a squat target control 803C are displayed in a human-computer interaction interface 801C, and a control list 804C including a plurality of control options is also displayed in the human-computer interaction interface 801C, where each control option corresponds to a target control. In response to a sequential selection operation for each option in the control list, a sequential selection order is obtained, for example, the option corresponding to the right probe target control is selected first, and the option corresponding to the squat target control is selected second. In response to each touch operation for the human-computer interaction interface, the following processing is performed: displaying, at the contact position of the touch operation with the human-computer interaction interface, a new control for replacing the target control to be replaced, where the target control to be replaced is the first target control that has not yet been replaced in the sequential selection order; that is, in response to the first touch operation, a new control 805C for replacing the right probe target control is displayed at the contact position of the first touch operation, and in response to the second touch operation, a new control 806C for replacing the squat target control is displayed at the contact position of the second touch operation.
In some embodiments, the number of target controls is a plurality and the target controls are selected in batches in sequence, the operation is a batch of touch operations in the human-computer interaction interface, and the number of touch operations is the same as the number of target controls; step 103, in response to the operation for the man-machine interaction interface, displaying a new control for replacing the target control at the contact position between the operation and the man-machine interaction interface, can be realized by the following technical scheme: in response to each touch operation for the human-computer interaction interface, performing the following processing: displaying a new control for replacing the target control to be replaced at the contact position of the touch operation with the man-machine interaction interface; wherein the target control to be replaced is the first target control that has not yet been replaced in the sequence. The configuration efficiency of the controls can be improved by selecting target controls in batches and performing batch touch operations, thereby improving the man-machine interaction efficiency.
As an example, referring to fig. 8D, fig. 8D is an interface schematic diagram of a control processing method of a virtual scene provided by an embodiment of the present application. A right probe target control 802D with a selected mark and a squat target control 803D are displayed in a human-computer interaction interface 801D. In response to each touch operation for the human-computer interaction interface, the following processing is performed: displaying, at the contact position of the touch operation with the human-computer interaction interface, a new control for replacing the target control to be replaced, where the target control to be replaced is the first target control that has not yet been replaced in the sequential selection order; that is, in response to the first touch operation, a new control 805D for replacing the right probe target control is displayed at the contact position of the first touch operation, and in response to the second touch operation, a new control 806D for replacing the squat target control is displayed at the contact position of the second touch operation. For example, when five target controls with selected marks are displayed in the human-computer interaction interface and control A was selected first by the selection operation of step 1021, the first touch operation generates a new control for replacing control A; four target controls then remain, and if control B was selected first among the remaining four controls, the second touch operation generates a new control for replacing control B.
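A short Python sketch of this sequential batch replacement follows; it is illustrative only, and the control names and coordinates are assumptions.

```python
# Sketch: each touch consumes the first not-yet-replaced target control in the
# sequential selection order, and records the new control's contact position.
from collections import deque

def replace_in_order(selected_targets, touch_positions):
    pending = deque(selected_targets)   # selection order, e.g. right probe then squat
    placements = {}
    for pos in touch_positions:         # one touch operation per target control
        if not pending:
            break
        target = pending.popleft()      # first target not yet replaced
        placements[target] = pos        # new control is displayed at this contact position
    return placements

print(replace_in_order(["right_probe", "squat"], [(120, 540), (180, 600)]))
# {'right_probe': (120, 540), 'squat': (180, 600)}
```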
In some embodiments, the number of target controls is a plurality of and selected in batches, the operation is a multi-touch operation in the man-machine interaction interface, and the number of contacts of the multi-touch operation is the same as the number of the target controls; step 103 is to display a new control for replacing the target control at the contact position of the operation and man-machine interaction interface, which can be realized by the following technical scheme: and displaying a plurality of new controls in a plurality of contact positions corresponding to the plurality of contacts of the multi-point touch operation one by one, wherein the new controls in each contact position are used for replacing the target controls of the corresponding contact positions. The efficiency of position configuration of the control can be improved through multi-point touch operation, and the man-machine interaction efficiency is improved.
In some embodiments, displaying a plurality of new controls at a plurality of contact positions corresponding one-to-one to the plurality of contacts of the multi-touch operation may be achieved by the following technical scheme: for each target control, the following processing is performed: displaying a new control for replacing the target control at the contact position, among the plurality of contact positions, that is closest to the original position; wherein the original position is the position at which the target control was displayed before the operation. The embodiment of the application makes the display position of the new control closer to the original position of the corresponding target control, so that the layout of the new controls conforms to the layout habits of the user.
As an example, referring to fig. 8E, fig. 8E is an interface schematic diagram of a control processing method of a virtual scene provided by the embodiment of the present application. A right probe target control 802E with a selected mark and a squat target control 803E are displayed in a human-computer interaction interface 801E, and the number of contacts of the multi-touch operation is the same as the number of target controls, namely 2. In response to the multi-touch operation for the human-computer interaction interface, among the two contact positions corresponding to the two contacts of the multi-touch operation, contact position A is closer to the original position of the right probe target control 802E than contact position B, and contact position B is closer to the original position of the squat target control 803E than contact position A; therefore, a new control 805E corresponding to the right probe target control 802E is displayed at contact position A, and a new control corresponding to the squat target control 803E is displayed at contact position B.
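The following Python sketch illustrates this nearest-contact matching; the coordinates and control names are hypothetical.

```python
# Sketch: each target control is replaced at the contact position closest to its
# original position, and each contact hosts exactly one new control.
import math

def match_by_distance(original_positions, contact_positions):
    assignments = {}
    remaining = list(contact_positions)
    for name, origin in original_positions.items():
        nearest = min(remaining, key=lambda p: math.dist(origin, p))
        assignments[name] = nearest
        remaining.remove(nearest)   # a contact position is used only once
    return assignments

originals = {"right_probe": (900, 500), "squat": (950, 650)}
contacts = [(880, 520), (940, 660)]  # contact position A and contact position B
print(match_by_distance(originals, contacts))
# {'right_probe': (880, 520), 'squat': (940, 660)}
```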
In some embodiments, prior to displaying the plurality of new controls, obtaining functional data for the plurality of target controls and location data for the plurality of contact locations; the following processing is performed for each target control through the third neural network model: extracting functional features from the functional data and extracting position features from the position data; fusion processing is carried out on the functional characteristics and the position characteristics to obtain second fusion characteristics; mapping the second fusion feature into a third probability that a new control corresponding to the target control is at each contact position; and carrying out linear programming processing based on the third probability that the new control of each target control is at each contact position, and obtaining the contact position of the new control for replacing each target control. The intelligent degree and accuracy of matching contact positions for a plurality of target controls can be improved through the neural network model, and the configuration efficiency of the controls is effectively improved.
As an example, the functional data is used for representing the function of the target control, and the position data is used for representing the position of the contact position. Sample functional data and sample position data of each sample control are collected in a sample virtual scene, sample functional features are extracted from the sample functional data, sample position features are extracted from the sample position data, and a training sample is constructed from the collected sample functional features and sample position features. The training sample is used as the input of the third neural network model to be trained, and the contact position of the new control corresponding to the sample control in the sample virtual scene is used as the labeling data. The third neural network model is trained based on the training samples and the labeling data, so that the third probability of the new control of each target control being at each contact position can be determined through the third neural network model. For example, the third probability that the new control of the shooting target control is located at contact position A is 0.7, the third probability that the new control of the shooting target control is located at contact position B is 0.3, the third probability that the new control of the squatting target control is located at contact position A is 0.6, and the third probability that the new control of the squatting target control is located at contact position B is 0.4; through linear programming processing, it can be determined that the new control of the shooting target control is displayed at contact position A and the new control of the squatting target control is displayed at contact position B.
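A small sketch of the assignment step follows, assuming SciPy is available; the probability values mirror the example above and the names are assumptions. The Hungarian algorithm is used here as one concrete way to solve the linear programming / assignment problem.

```python
# Sketch: given the third probability of each target control's new control being
# at each contact position, pick the one-to-one assignment with maximum total
# probability.
import numpy as np
from scipy.optimize import linear_sum_assignment

controls = ["shoot", "squat"]
contacts = ["A", "B"]
# prob[i][j] = third probability that control i's new control sits at contact j
prob = np.array([[0.7, 0.3],
                 [0.6, 0.4]])

rows, cols = linear_sum_assignment(-prob)  # negate to maximize instead of minimize
for i, j in zip(rows, cols):
    print(controls[i], "-> contact", contacts[j])
# shoot -> contact A, squat -> contact B  (total probability 0.7 + 0.4 = 1.1)
```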
In some embodiments, the opacity of the new control is positively correlated to an operating parameter corresponding to the operation of the new control, wherein the operating parameter includes at least one of: the contact time of the operation and the contact force of the operation. The opacity of control display is controlled through the contact time and the contact force, so that the opacity setting of the control is also related to the operation aiming at the human-computer interaction interface in the step 103, which is equivalent to the fact that the position, the opacity and the response area of a certain control can be rapidly configured through single operation, and the control configuration efficiency is effectively improved.
As an example, embodiments of the present application may additionally detect a contact duration and a contact force based on identifying a contact area and associate at least one of the contact duration and the contact force with the opacity.
Taking the contact time as an example, intervals of the contact time may be associated with different opacities, that is, a piecewise association between the contact time and the opacity may be formed. For example, when the contact time is greater than 0.4 seconds, the opacity of the control is 100%; when the contact time is greater than 0.3 seconds and not greater than 0.4 seconds, the opacity of the control is 75%; when the contact time is greater than 0.2 seconds and not greater than 0.3 seconds, the opacity of the control is 50%; and when the contact time is greater than 0 seconds and not greater than 0.2 seconds, the opacity of the control is 25%. Alternatively, a linear association between the contact time and the opacity may be formed: the longer the contact time, the higher the opacity, so that different contact times correspond to different opacities.
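A minimal Python sketch of the piecewise association described above follows; the break points and percentages follow the example and are illustrative.

```python
# Sketch: map the contact duration of the operation to the opacity of the new control.
def opacity_from_contact_time(seconds: float) -> int:
    if seconds > 0.4:
        return 100
    if seconds > 0.3:
        return 75
    if seconds > 0.2:
        return 50
    if seconds > 0.0:
        return 25
    return 0

for t in (0.15, 0.25, 0.35, 0.5):
    print(t, "s ->", opacity_from_contact_time(t), "% opacity")
```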
Taking the contact time and the contact force as independent variables and the opacity as a dependent variable, an association between the contact time, the contact force and the opacity can be constructed to control the opacity with which the control is displayed.
In some embodiments, displaying a new control for replacing the target control at the contact position of the operation with the human-computer interaction interface in step 103 may be implemented by the following technical scheme: displaying the new control with the target opacity at the contact position of the operation with the human-computer interaction interface; or displaying, at the contact position of the operation with the human-computer interaction interface, the new control changing from zero opacity to the target opacity.
As an example, the new control may be displayed directly with the target opacity, or a change of the new control from completely transparent to the target opacity may be displayed, thereby improving the display diversity of replacing the target control with the new control, improving the utilization of display resources, and providing a transition between the new control and the target control.
In some embodiments, while the new control is displayed changing from zero opacity to the target opacity, any one of the following processes is performed: displaying a process in which the target control changes from its original opacity to zero opacity; displaying the target control at its original opacity until the new control reaches the target opacity, and then hiding the target control; or hiding the target control as soon as the opacity of the new control becomes non-zero.
As an example, while the opacity of the new control is changing, the target control may be changed from its original opacity to zero opacity, that is, the target control is gradually removed from the human-computer interaction interface; alternatively, the target control may remain displayed at its original opacity until the new control reaches the target opacity; or the target control may be hidden as soon as the new control begins to appear. Providing diversified display effects through these different display modes of the target control improves the utilization of display resources.
In some embodiments, before the new control for replacing the target control is displayed at the contact position in step 103, an ellipse is generated with the longest axis of the contact area between the operation and the human-computer interaction interface as its major radius and the shortest axis of the contact area as its minor radius, and the ellipse is determined as the response area of the new control.
As an example, an ellipse is generated by taking the longest axis of the contact area between the operation and the human-computer interaction interface as its major radius and the shortest axis of the contact area as its minor radius, and the ellipse is determined as the response area of the new control, so that the response area fits the actual operation of the user and the user can successfully trigger the new control subsequently.
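The following Python sketch illustrates an elliptical response area built from the contact region; the class, field names, the use of half-axes as radii and the inclination angle are illustrative assumptions.

```python
# Sketch: the longest axis of the contact region gives the major radius, the
# shortest axis the minor radius, and a hit test checks whether a later touch
# falls inside the new control's response area.
from dataclasses import dataclass
import math

@dataclass
class EllipseResponseArea:
    center: tuple        # contact position of the operation
    major_radius: float  # derived from the longest axis of the contact region
    minor_radius: float  # derived from the shortest axis of the contact region
    angle: float         # inclination of the major axis, in radians

    def contains(self, x: float, y: float) -> bool:
        # rotate the point into the ellipse's local frame, then apply the ellipse equation
        dx, dy = x - self.center[0], y - self.center[1]
        u = dx * math.cos(self.angle) + dy * math.sin(self.angle)
        v = -dx * math.sin(self.angle) + dy * math.cos(self.angle)
        return (u / self.major_radius) ** 2 + (v / self.minor_radius) ** 2 <= 1.0

area = EllipseResponseArea(center=(880, 520), major_radius=40, minor_radius=25, angle=0.3)
print(area.contains(900, 530))  # True: the touch lands inside the response area
```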
In some embodiments, the target control is associated with multiple functions, and different functions correspond to different trigger pressures; in response to the operation for the human-computer interaction interface, the plurality of functions to be matched with the pressure of the operation are displayed; and in response to a selection operation for a target function among the plurality of functions, the pressure of the operation is identified as the minimum trigger pressure of the target function.
As an example, the target control is associated with two functions, and the minimum trigger pressure of the shooting function is greater than the minimum trigger pressure of the squatting function, where the minimum trigger pressure is the smallest pressure that can successfully trigger the corresponding function. The operation for the human-computer interaction interface is a touch operation; in response to the touch operation, the shooting function and the squatting function are displayed, and in response to a selection operation for the shooting function, the pressure of the touch operation is identified as the minimum trigger pressure of the shooting function, so that the shooting function can subsequently be triggered whenever the pressure of an actual operation is greater than or equal to the pressure of the touch operation.
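A hypothetical Python sketch of this pressure-based configuration follows; the function names, threshold values and resolution rule (preferring the highest threshold the touch still meets) are assumptions.

```python
# Sketch: the pressure of the configuration touch is recorded as the minimum
# trigger pressure of the selected function; later touches fire the function
# with the highest threshold that the touch pressure still satisfies.
class PressureMappedControl:
    def __init__(self):
        self.min_trigger_pressure = {}  # function name -> minimum trigger pressure

    def configure(self, function: str, touch_pressure: float):
        # selection operation: the operation's pressure becomes the minimum
        # trigger pressure of the target function
        self.min_trigger_pressure[function] = touch_pressure

    def resolve(self, pressure: float):
        eligible = {f: p for f, p in self.min_trigger_pressure.items() if pressure >= p}
        return max(eligible, key=eligible.get) if eligible else None

control = PressureMappedControl()
control.configure("squat", 0.2)
control.configure("shoot", 0.6)   # shooting needs more pressure than squatting
print(control.resolve(0.3))       # squat
print(control.resolve(0.7))       # shoot
```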
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In some embodiments, an account logs in to a client (such as a network-version game application) running on a terminal. The client displays the selected target control in response to a selection operation of the account for a control. In response to an operation of the account on the human-computer interaction interface, the client sends the contact position and the contact area of the operation to a server through a network, the server calculates display data of a new control based on the contact position and the contact area and sends the display data to the client, and the client displays the new control at the contact position of its human-computer interaction interface based on the display data, where the response area of the new control matches the contact area.
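A simplified sketch of this client/server exchange follows; the function names, payload fields and the local (non-networked) call are assumptions made purely for illustration.

```python
# Sketch: the client reports the contact position and contact area, the server
# computes display data for the new control and returns it to the client.
def server_compute_display_data(contact_position, contact_area):
    # the response area matches the contact area; the new control is centered
    # at the contact position
    return {
        "center": contact_position,
        "major_radius": contact_area["longest_axis"] / 2,
        "minor_radius": contact_area["shortest_axis"] / 2,
    }

def client_replace_control(target_control, contact_position, contact_area):
    # in practice this round trip goes over the network
    display_data = server_compute_display_data(contact_position, contact_area)
    return {"replaces": target_control, **display_data}

print(client_replace_control("shoot", (880, 520),
                             {"longest_axis": 80, "shortest_axis": 50}))
```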
In some embodiments, referring to fig. 5A, a custom key layout setting interface is displayed in the human-machine interaction interface 501A, and a plurality of controls 502A associated with each function in the virtual scene are displayed in the custom key layout setting interface, for example, combat-related buttons including a left-right shoot button, a mirror open button, a jump-squat button, a bullet-change button, a left-right probe button, and the like.
In some embodiments, referring to fig. 6, a plurality of controls 602 related to each function in the virtual scene are displayed in the man-machine interaction interface 601, in response to clicking operation of the account number on any control, the selected control 602 is a target control, the highlighted outer frame of the selected control 602 is a selected mark, the intelligent recognition control 604 is displayed in the top operation field 603 of the man-machine interaction interface 601, and the intelligent recognition control may be always displayed in the top operation field, or the intelligent recognition control 604 is displayed in the top operation field 603 when the control 602 is selected.
In some embodiments, referring to fig. 7, a target control 702 is displayed in a human-computer interaction interface 701, an intelligent recognition control 704 and a control size adjustment progress bar 706 are displayed in a top operation field 703 of the human-computer interaction interface 701, in response to a click operation of an account number on the intelligent recognition control 704, the intelligent recognition control 704 is highlighted, a prompt message 705 is displayed in a lower area of the top operation field for prompting that intelligent recognition can be performed for operation currently to generate a new control, and the control size adjustment progress bar is hidden.
In some embodiments, referring to fig. 8A, a target control 802A is displayed in a human-computer interaction interface 801A, an intelligent recognition control 804A is displayed in a top operation bar 803A of the human-computer interaction interface 801A, and the intelligent recognition control 804A is highlighted. In response to a finger clicking a certain position in the human-computer interaction interface 801A, a new control 805A with an inclination angle is automatically generated according to the longest diameter and the shortest diameter of the contact area of the clicking operation, the intelligent recognition control is hidden, and a return control 806A and a completion control 807A are displayed. In response to a triggering operation for the return control 806A, the present click on the human-computer interaction interface can be canceled, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically; in response to a triggering operation for the completion control 807A, the new control generated by the present operation is recorded, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically.
In some embodiments, referring to fig. 9, fig. 9 is a schematic diagram of new control generation of a control processing method of a virtual scene provided by the embodiment of the present application, a contact area of an operation and a man-machine interaction interface is obtained, a longest axis and a shortest axis of the contact area are obtained, an ellipse is generated based on the longest axis and the shortest axis, the longest axis is a long diameter of the ellipse, the shortest axis is a short diameter of the ellipse, and the ellipse is used as a response area to generate the new control.
In some embodiments, referring to fig. 10A, fig. 10A is an interface schematic diagram of a control processing method for a virtual scene provided by the embodiment of the present application. A custom key layout setting interface is displayed in a human-computer interaction interface 1001A, a target control 1002A is displayed in the custom key layout setting interface, an intelligent recognition control is displayed in the top operation bar of the custom key layout setting interface, and the intelligent recognition control is highlighted. In response to a finger clicking a certain position in the human-computer interaction interface 1001A, a new control 1003A with an inclination angle is automatically generated according to the longest diameter and the shortest diameter of the contact area of the clicking operation, where the opacity of the new control 1003A is positively correlated with the contact time, for example, the contact time is 0.5 seconds and the opacity is 75%. The intelligent recognition control is hidden, and a return control and a completion control are displayed. In response to a triggering operation for the return control, the present click on the human-computer interaction interface can be canceled, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically; in response to a triggering operation for the completion control, the new control generated by the present operation is recorded, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically.
In some embodiments, referring to fig. 10B, fig. 10B is an interface schematic diagram of a control processing method for a virtual scene provided by the embodiment of the present application. A custom key layout setting interface is displayed in a human-computer interaction interface 1001B, a target control 1002B is displayed in the custom key layout setting interface, an intelligent recognition control is displayed in the top operation bar of the custom key layout setting interface, and the intelligent recognition control is highlighted. In response to a finger clicking a certain position in the human-computer interaction interface 1001B, a new control 1003B with an inclination angle is automatically generated according to the longest diameter and the shortest diameter of the contact area of the clicking operation, where the opacity of the new control 1003B is positively correlated with the contact time, for example, the contact time is 0.2 seconds and the opacity is 25%. The intelligent recognition control is hidden, and a return control and a completion control are displayed. In response to a triggering operation for the return control, the present click on the human-computer interaction interface can be canceled, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically; in response to a triggering operation for the completion control, the new control generated by the present operation is recorded, and the function of performing intelligent recognition on the operation to generate a new control is exited automatically.
In some embodiments, referring to fig. 11, fig. 11 is a flowchart of a control processing method of a virtual scene according to an embodiment of the present application. In step 1101, a clicking operation for any control is received; specifically, the control is a control related to a main operation, for example, a shooting control, a mirror-opening control, a jump-squat control, a bullet-change control, or a probe control. In step 1102, the outer frame of the control is highlighted. When a clicking operation for the intelligent recognition control is received, steps 1103 to 1105 are executed: in step 1103, the intelligent recognition control is highlighted; in step 1104, an operation prompt text is displayed; in step 1105, the control size adjustment progress bar is hidden. When a single-point touch of a finger on the screen followed by a release (for example, a clicking operation) is received, step 1106 is executed: the geometric bounding box of the contact area and the coordinates of the contact position are recognized and recorded. In step 1107, an elliptical new control is generated at the contact position based on the longest axis and the shortest axis in the recorded data. In step 1108, the intelligent recognition control is hidden. In step 1109, the operations associated with the intelligent recognition control are executed. In step 1110, a completion control and a return control are displayed. In step 1111, the original control is replaced with the new style and the new position, specifically, the original style (including size and shape) is replaced with the new style. In step 1113, the new control is generated at the new position without changing the original style.
The control processing method of the virtual scene provided by the embodiment of the application can generate a control whose shape fits the contact area at the contact position by calculating the contact area on the capacitive screen of the electronic equipment. Specifically, the contact area between the finger and the capacitive screen is identified, so that a control with a matching shape can be quickly generated, and it can be generated directly at the contact position. The control processing method for the virtual scene provided by the embodiment of the application can greatly reduce the configuration time of manually adjusting the control configuration, improve the custom configuration efficiency of the control, and provide higher man-machine interaction efficiency and flexibility.
It will be appreciated that in the embodiments of the present application, related data such as user information is involved, and when the embodiments of the present application are applied to specific products or technologies, user permissions or agreements need to be obtained, and the collection, use and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
Continuing with the description below of an exemplary architecture of the control processing device 455 implemented as a software module for a virtual scene provided by embodiments of the present application, in some embodiments, as shown in fig. 3, the software modules stored in the control processing device 455 for a virtual scene of the memory 450 may include: the display module 4551 is configured to display at least one control applied to the virtual scene in the human-computer interaction interface, wherein different controls are associated with different functions in the virtual scene; the selection module 4552 is configured to display a corresponding selection mark for a selected target control in the at least one control; a replacing module 4553, configured to display a new control for replacing the target control at a contact position between the operation and the human-computer interaction interface in response to the operation on the human-computer interaction interface; the response area of the new control is matched with the contact area operated in the human-computer interaction interface.
In some embodiments, the display module 4551 is further configured to: displaying a control configuration interface in the man-machine interaction interface; displaying at least one control in a control configuration interface; the control configuration interface is used for displaying before running the virtual scene or displaying when the virtual scene is in a pause state.
In some embodiments, the display module 4551 is further configured to: displaying a virtual scene in a human-computer interaction interface; displaying at least one control applied to the virtual scene in a picture of the virtual scene; wherein the virtual scene is in an operating state or a pause state.
In some embodiments, the display module 4551 is further configured to: displaying a first control applied to the virtual scene, wherein the first control is a control which is not adjusted in a first running time of the virtual scene; or displaying a second control applied to the virtual scene, wherein the second control is a control adjusted within a second running time of the virtual scene.
In some embodiments, the display module 4551 is further configured to: acquiring configuration data of a current account number of a login virtual scene for each control, wherein the configuration data comprises configuration frequency of the control and configuration parameters of each configuration; the following processing is performed by the first neural network model: extracting configuration features from the configuration data, and mapping the configuration features to first probabilities of each control; and performing descending order sorting processing based on the first probability on all the controls, and displaying a plurality of controls with front sorting in the man-machine interaction interface.
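A minimal sketch, assuming PyTorch, of what the "first neural network model" used by the display module could look like; the feature dimensions, layer sizes and input data are illustrative assumptions.

```python
# Sketch: configuration data for each control is mapped to a first probability,
# the controls are sorted in descending order of that probability, and the
# top-ranked controls are the ones displayed in the interface.
import torch
import torch.nn as nn

class ControlDisplayModel(nn.Module):
    def __init__(self, cfg_dim=8, hidden=16):
        super().__init__()
        self.encoder = nn.Linear(cfg_dim, hidden)  # configuration-feature extraction
        self.head = nn.Linear(hidden, 1)           # features -> first probability

    def forward(self, cfg_data):
        feat = torch.relu(self.encoder(cfg_data))
        return torch.sigmoid(self.head(feat)).squeeze(-1)

model = ControlDisplayModel()
cfg = torch.randn(5, 8)                            # configuration data of five controls
probs = model(cfg)
order = torch.argsort(probs, descending=True)      # descending sort on the first probability
print("controls displayed first:", order.tolist())
```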
In some embodiments, the selecting module 4552 is further configured to: responsive to a selection operation for at least one control, a corresponding selected label is displayed for at least one target control selected from the at least one control.
In some embodiments, when the number of the at least one control is a plurality, the selecting module 4552 is further configured to: perform, for the plurality of controls, sorting processing based on a specific dimension, take the control ranked first as the selected target control, and display the corresponding selected mark; wherein the specific dimensions include: the last use time, the use frequency, the configuration frequency, the response sensitivity and the response score rate, wherein the response sensitivity is used for representing the probability that the control is successfully triggered, and the response score rate is used for representing the probability of generating a score after the control is successfully triggered.
In some embodiments, the selecting module 4552 is further configured to: acquire historical operation data and historical interaction results for the at least one control in the virtual scene; perform the following processing through the second neural network model: extract operation features from the historical operation data, and extract result features from the historical interaction results; perform fusion processing on the operation features and the result features to obtain first fusion features; map the first fusion features to a second probability that each control does not conform to the operation habits of the account; and perform descending sorting processing based on the second probability on the plurality of controls, take the controls ranked first as the selected target controls, and display the corresponding selected marks.
In some embodiments, the human-computer interaction interface includes an intelligent recognition control, where the intelligent recognition control is configured to switch between an on state and an off state when triggered, and the on state characterizes an on intelligent recognition mode; before the new control for replacing the target control is displayed at the contact position with the human-computer interaction interface in response to the operation for the human-computer interaction interface, the replacing module 4553 is further configured to: in response to a triggering operation for the intelligent recognition control, displaying that the intelligent recognition control is in an on state, performing at least one of the following: displaying prompt information, wherein the prompt information is used for prompting implementation operation; displaying a selected mark of the intelligent identification control; and displaying that at least one of the position controls and the style controls in the human-computer interaction interface is in a non-triggerable state.
In some embodiments, after the new control for replacing the target control is displayed at the contact position with the human-machine interface, the replacing module 4553 is further configured to: hiding the intelligent identification control; displaying a return control and a completion control; storing a new control in response to a triggering operation for the completion control; and in response to the triggering operation for the return control, hiding the new control and restoring the display target control.
In some embodiments, the human-computer interaction interface includes an intelligent recognition control, where the intelligent recognition control is configured to switch between an on state and an off state when triggered, the off state representing an off intelligent recognition mode; after displaying the corresponding selected mark, the replacing module 4553 is further configured to: responding to triggering operation for the intelligent recognition control piece, and displaying that the intelligent recognition control piece is in a closed state; responding to the position moving operation of the position control included in the man-machine interaction interface, and moving the target control in the man-machine interaction interface; and responding to the style adjustment operation aiming at the style control included in the man-machine interaction interface, and adjusting the display style of the target control based on the style adjustment operation.
In some embodiments, the number of target controls is a plurality of and selected in batches, the operation is a batch of touch operations in the man-machine interaction interface, and the number of touch operations is the same as the number of target controls; the replacement module 4553 is further configured to: in response to each touch operation for the human-computer interaction interface, performing the following processing: displaying a plurality of selection controls, wherein the plurality of selection controls are in one-to-one correspondence with a plurality of displayed target controls; and responding to the triggering operation aiming at any one of the selection controls, and displaying a new control for replacing the target control to be replaced at the contact position, wherein the target control to be replaced is the target control corresponding to the currently triggered selection control.
In some embodiments, the number of target controls is a plurality of and is selected in batches in sequence, the operation is a batch of touch operation in the human-computer interaction interface, and the number of touch operations is the same as the number of target controls; the replacement module 4553 is further configured to: in response to each touch operation for the human-computer interaction interface, performing the following processing: displaying a new control for replacing the target control to be replaced at the contact position of the touch operation and the man-machine interaction interface; wherein the target control to be replaced is the first target control not replaced in the sequence.
In some embodiments, the number of target controls is a plurality of and selected in batches, the operation is a multi-touch operation in the man-machine interaction interface, and the number of contacts of the multi-touch operation is the same as the number of the target controls; the replacement module 4553 is further configured to: and displaying a plurality of new controls in a plurality of contact positions corresponding to the plurality of contacts of the multi-touch operation one by one, wherein the new controls in each contact position are used for replacing the target controls of the corresponding contact positions.
In some embodiments, the replacement module 4553 is further to: for each target control, the following processing is performed: displaying a new control for replacing the target control in a contact position closest to the original position among the plurality of contact positions; wherein the home position is a display position responsive to the target control prior to the operation.
In some embodiments, before displaying the plurality of new controls, the replacement module 4553 is further configured to: acquiring functional data of a plurality of target controls and position data of a plurality of contact positions; the following processing is performed for each target control through the third neural network model: extracting functional features from the functional data and extracting position features from the position data; fusion processing is carried out on the functional characteristics and the position characteristics to obtain second fusion characteristics; mapping the second fusion feature into a third probability that a new control corresponding to the target control is at each contact position; and carrying out linear programming processing based on the third probability that the new control of each target control is at each contact position, and obtaining the contact position of the new control for replacing each target control.
In some embodiments, the opacity of the new control is positively correlated to an operating parameter corresponding to the operation of the new control, wherein the operating parameter includes at least one of: the contact time of the operation and the contact force of the operation.
In some embodiments, the replacement module 4553 is further to: displaying a new control with target opacity in a contact position of the operation and man-machine interaction interface; or in operating contact with the human-machine interface, new controls from zero opacity to target opacity are displayed.
In some embodiments, when in the process of displaying a new control from zero opacity to target opacity, the replacement module 4553 is further configured to: any one of the following processes is performed: a process of displaying a change in the target control from original opacity to zero opacity; displaying the target control based on the original opacity until the new control is in the target opacity, and hiding the target control; the target control is hidden starting with the opacity of the new control being non-zero.
In some embodiments, the replacement module 4553 is further configured to, prior to operating the contact location with the human-machine interface to display a new control for replacing the target control: generating an ellipse by taking the longest wheelbase of the contact area of the operation and human-computer interaction interface as a long radius and the shortest wheelbase of the contact area of the operation and human-computer interaction interface as a short radius; an ellipse is determined as the response area of the new control.
In some embodiments, the target control is associated with multiple functions, and different functions correspond to different trigger pressures; the replacement module 4553 is further configured to: display, in response to the operation for the man-machine interaction interface, the plurality of functions to be matched with the pressure of the operation; and in response to a selection operation for a target function among the plurality of functions, identify the pressure of the operation as the minimum trigger pressure of the target function.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the control processing method of the virtual scene according to the embodiment of the application.
Embodiments of the present application provide a computer readable storage medium storing executable instructions, wherein the executable instructions are stored, which when executed by a processor, cause the processor to perform a control processing method of a virtual scene provided by the embodiments of the present application, for example, a control processing method of a virtual scene as shown in fig. 4A-4C.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, on multiple computing devices distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the application, for the selected target control, in response to the operation for the man-machine interaction interface, a new control for replacing the target control is displayed at the contact position of the operation with the man-machine interaction interface, and the response area of the new control matches the contact area of the operation in the man-machine interaction interface. This is equivalent to identifying the response area and the layout position of the new control through the operation itself, which improves the configuration efficiency of the control; and because the contact area is generated directly by the operation, the response area matched with the contact area can effectively respond to subsequent triggering operations for the control, which improves the configuration accuracy of the control.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (25)

1. A control processing method for a virtual scene, the method comprising:
displaying at least one control applied to a virtual scene in a human-computer interaction interface, wherein different controls are associated with different functions in the virtual scene;
displaying a corresponding selected mark aiming at a selected target control in the at least one control;
responding to the operation for the human-computer interaction interface, and displaying a new control for replacing the target control at the contact position of the operation and the human-computer interaction interface;
and the response area of the new control is matched with the contact area of the operation in the man-machine interaction interface, and the function associated with the new control is the same as the function of the target control replaced by the new control.
2. The method of claim 1, wherein displaying at least one control applied to the virtual scene in the human-machine interaction interface comprises:
Displaying a control configuration interface in the man-machine interaction interface;
displaying at least one control applied to the virtual scene in the control configuration interface;
the control configuration interface is used for displaying before running the virtual scene or displaying when the virtual scene is in a pause state.
3. The method of claim 1, wherein displaying at least one control applied to the virtual scene in the human-computer interaction interface comprises:
displaying the virtual scene in the human-computer interaction interface;
displaying the at least one control applied to the virtual scene in a picture of the virtual scene;
wherein the virtual scene is in a running state or a paused state.
4. The method of claim 1, wherein displaying at least one control applied to the virtual scene in the human-computer interaction interface comprises:
displaying a first control applied to the virtual scene, wherein the first control is a control that has not been adjusted within a first running duration of the virtual scene; or
displaying a second control applied to the virtual scene, wherein the second control is a control that has been adjusted within a second running duration of the virtual scene.
5. The method of claim 1, wherein displaying at least one control applied to the virtual scene in the human-computer interaction interface comprises:
acquiring, for each control, configuration data of the current account logged into the virtual scene, wherein the configuration data comprises a configuration frequency of the control and configuration parameters of each configuration;
performing the following processing through a first neural network model: extracting configuration features from the configuration data, and mapping the configuration features to a first probability of each control;
sorting all the controls in descending order based on the first probability, and displaying the top-ranked at least one control in the human-computer interaction interface.
6. The method of claim 1, wherein displaying a corresponding selected mark for a selected target control among the at least one control comprises:
in response to a selection operation on the at least one control, displaying a corresponding selected mark for at least one target control selected from the at least one control.
7. The method of claim 1, wherein, when the number of the at least one control is plural, displaying a corresponding selected mark for a selected target control among the at least one control comprises:
sorting the plurality of controls based on specific dimensions, using the top-ranked controls as the selected target controls, and displaying corresponding selected marks;
wherein the specific dimensions include: latest use time, use frequency, configuration frequency, response sensitivity, and response score rate, wherein the response sensitivity represents the probability that a control is successfully triggered, and the response score rate represents the probability that a successful trigger of the control produces a score.
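A minimal sketch of this dimension-based selection is shown below; the choice of a single ranking dimension and the top-n cutoff are assumptions, the claim only names the candidate dimensions.

    def select_targets(controls, stats, dimension="response_score_rate", top_n=2):
        """controls: list of control ids; stats[control][dimension] holds the per-control value.

        Sorts by the chosen dimension (latest use time, use frequency, configuration
        frequency, response sensitivity, or response score rate) and returns the
        top-ranked controls as the selected target controls.
        """
        ranked = sorted(controls, key=lambda c: stats[c][dimension], reverse=True)
        return ranked[:top_n]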
8. The method of claim 1, wherein, when the number of the at least one control is plural, displaying a corresponding selected mark for a selected target control among the at least one control comprises:
acquiring historical operation data and historical interaction results for the at least one control in the virtual scene;
performing the following processing through a second neural network model: extracting operation features from the historical operation data, and extracting result features from the historical interaction results; fusing the operation features and the result features to obtain a first fused feature; and mapping the first fused feature to a second probability that each control does not match the operation habits of the account;
sorting the plurality of controls in descending order based on the second probability, using the top-ranked controls as the selected target controls, and displaying corresponding selected marks.
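As a sketch of the fusion step only: operation features and result features are combined into the first fused feature and scored as the second probability. Concatenation as the fusion operation and the logistic scorer are assumptions, not requirements of the claim.

    import math

    def second_probability(operation_features, result_features, weights, bias):
        fused = list(operation_features) + list(result_features)  # first fused feature
        logit = sum(w * f for w, f in zip(weights, fused)) + bias
        return 1.0 / (1.0 + math.exp(-logit))  # probability the control does not match the account's habits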
9. The method of claim 1, wherein:
the human-computer interaction interface comprises an intelligent recognition control, wherein the intelligent recognition control switches between an on state and an off state when triggered, and the on state indicates that an intelligent recognition mode is enabled;
before displaying, in response to the operation on the human-computer interaction interface, the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface, the method further comprises:
in response to a trigger operation on the intelligent recognition control, displaying the intelligent recognition control in the on state, and performing at least one of the following:
displaying prompt information, wherein the prompt information prompts the user to perform the operation;
displaying a selected mark of the intelligent recognition control;
displaying at least one of a position control and a style control in the human-computer interaction interface in a non-triggerable state.
10. The method of claim 9, wherein, after displaying the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface, the method further comprises:
hiding the intelligent recognition control;
displaying a return control and a completion control;
in response to a trigger operation on the completion control, saving the new control;
in response to a trigger operation on the return control, hiding the new control and restoring display of the target control.
11. The method of claim 1, wherein:
the human-computer interaction interface comprises an intelligent recognition control, wherein the intelligent recognition control switches between an on state and an off state when triggered, and the off state indicates that the intelligent recognition mode is disabled;
after displaying the corresponding selected mark, the method further comprises:
in response to a trigger operation on the intelligent recognition control, displaying the intelligent recognition control in the off state;
in response to a position movement operation on a position control included in the human-computer interaction interface, moving the target control within the human-computer interaction interface;
in response to a style adjustment operation on a style control included in the human-computer interaction interface, adjusting the display style of the target control based on the style adjustment operation.
12. The method of claim 1, wherein:
the number of target controls is plural and the target controls are selected in batches, the operation is a batch of touch operations in the human-computer interaction interface, and the number of touch operations is the same as the number of target controls;
displaying, in response to the operation on the human-computer interaction interface, the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface comprises:
performing the following processing in response to each of the touch operations on the human-computer interaction interface:
displaying a plurality of selection controls, wherein the plurality of selection controls correspond one-to-one to the plurality of displayed target controls;
in response to a trigger operation on any one of the selection controls, displaying, at the contact position, the new control for replacing the target control to be replaced, wherein the target control to be replaced is the target control corresponding to the currently triggered selection control.
13. The method of claim 1, wherein:
the number of target controls is plural and the target controls are selected in batches in a sequence, the operation is a batch of touch operations in the human-computer interaction interface, and the number of touch operations is the same as the number of target controls;
displaying, in response to the operation on the human-computer interaction interface, the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface comprises:
performing the following processing in response to each of the touch operations on the human-computer interaction interface:
displaying, at the contact position between the touch operation and the human-computer interaction interface, the new control for replacing the target control to be replaced;
wherein the target control to be replaced is the first target control in the sequence that has not yet been replaced.
14. The method of claim 1, wherein:
the number of target controls is plural and the target controls are selected in batches, the operation is a multi-touch operation in the human-computer interaction interface, and the number of contacts of the multi-touch operation is the same as the number of target controls;
displaying the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface comprises:
displaying a plurality of new controls at a plurality of contact positions corresponding one-to-one to the plurality of contacts of the multi-touch operation, wherein the new control at each contact position replaces the target control corresponding to that contact position.
15. The method of claim 14, wherein displaying a plurality of the new controls at a plurality of the contact positions corresponding one-to-one to the plurality of contacts of the multi-touch operation comprises:
performing the following processing for each of the target controls:
displaying the new control for replacing the target control at the contact position, among the plurality of contact positions, closest to the original position of the target control;
wherein the original position is the display position of the target control before responding to the operation.
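One way to realize this nearest-contact rule is sketched below; processing the target controls in order and removing each used contact so the pairing stays one-to-one is an assumption, the claim itself only requires the closest contact position per target control.

    import math

    def nearest_contact(original_positions, contact_positions):
        """Return {target index: contact index}, pairing each target control with the
        closest still-unused contact position (greedy, in target order)."""
        remaining = list(range(len(contact_positions)))
        assignment = {}
        for i, (ox, oy) in enumerate(original_positions):
            j = min(remaining, key=lambda k: math.hypot(contact_positions[k][0] - ox,
                                                        contact_positions[k][1] - oy))
            assignment[i] = j
            remaining.remove(j)
        return assignment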
16. The method of claim 14, wherein, before displaying the plurality of new controls, the method further comprises:
acquiring function data of the plurality of target controls and position data of the plurality of contact positions;
performing the following processing for each target control through a third neural network model: extracting function features from the function data and position features from the position data; fusing the function features and the position features to obtain a second fused feature; and mapping the second fused feature to a third probability of the new control corresponding to the target control being at each contact position;
performing linear programming based on the third probability of each target control's new control being at each contact position, to obtain the contact position of the new control for replacing each target control.
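The final step is an assignment problem over the third-probability matrix. As one concrete stand-in for the "linear programming" named in the claim, the sketch below uses SciPy's linear_sum_assignment (Hungarian method) to pick a one-to-one mapping that maximizes total probability; the SciPy dependency and the maximize-total-probability objective are assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_contacts(third_prob):
        """third_prob[i][j]: probability that target control i's new control sits at
        contact position j. Returns a list where entry i is the chosen contact index."""
        cost = -np.asarray(third_prob)          # maximize probability == minimize negated probability
        rows, cols = linear_sum_assignment(cost)
        return [int(c) for _, c in sorted(zip(rows, cols))]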
17. The method of claim 1, wherein the opacity of the new control is positively correlated with an operating parameter of the operation corresponding to the new control, and the operating parameter comprises at least one of: a contact duration of the operation and a contact force of the operation.
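A small sketch of this positive correlation: longer contact or firmer pressure yields a more opaque new control. The linear ramp, the 0.2 floor, and the normalization constants are assumptions; the claim only requires that opacity increase with the operating parameter.

    def new_control_opacity(contact_time_s=0.0, contact_force=0.0,
                            max_time_s=1.0, max_force=1.0):
        # opacity grows with whichever operating parameter is stronger
        level = max(contact_time_s / max_time_s, contact_force / max_force)
        return min(1.0, 0.2 + 0.8 * level)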
18. The method of claim 1, wherein displaying the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface comprises:
displaying, at the contact position between the operation and the human-computer interaction interface, the new control at a target opacity; or
displaying, at the contact position between the operation and the human-computer interaction interface, the new control changing from zero opacity to the target opacity.
19. The method of claim 18, wherein, in the process of displaying the new control changing from zero opacity to the target opacity, the method further comprises:
performing any one of the following processes:
displaying the target control changing from its original opacity to zero opacity;
displaying the target control at its original opacity until the new control reaches the target opacity, and then hiding the target control;
hiding the target control as soon as the opacity of the new control becomes non-zero.
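For the first of these processes, a cross-fade can be driven frame by frame, as in the sketch below; the frame count and the linear interpolation are assumptions.

    def cross_fade_frames(original_opacity, target_opacity, frames=30):
        """Yield per-frame opacities: the target control fades from its original
        opacity to zero while the new control rises from zero to the target opacity."""
        for f in range(frames + 1):
            t = f / frames
            yield {
                "target_control": original_opacity * (1.0 - t),  # original -> zero
                "new_control": target_opacity * t,               # zero -> target
            }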
20. The method of claim 1, wherein, before displaying the new control for replacing the target control at the contact position between the operation and the human-computer interaction interface, the method further comprises:
generating an ellipse by taking the longest axis length of the contact area between the operation and the human-computer interaction interface as the major radius and the shortest axis length of the contact area as the minor radius;
determining the ellipse as the response area of the new control.
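A rough sketch of fitting such an elliptical response area follows; treating the contact area as a non-empty set of sampled (x, y) points and approximating the longest and shortest axes by the largest and smallest centroid distances are assumptions made only for illustration.

    import math

    def ellipse_response_area(contact_points):
        """Return a predicate that tests whether a trigger point falls inside the
        ellipse fitted to the contact area."""
        cx = sum(x for x, _ in contact_points) / len(contact_points)
        cy = sum(y for _, y in contact_points) / len(contact_points)
        dists = [math.hypot(x - cx, y - cy) for x, y in contact_points]
        a = max(max(dists), 1e-6)   # major radius ~ longest extent of the contact area
        b = max(min(dists), 1e-6)   # minor radius ~ shortest extent of the contact area

        def contains(px, py):
            return ((px - cx) / a) ** 2 + ((py - cy) / b) ** 2 <= 1.0

        return contains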
21. The method of claim 1, wherein:
the target control is associated with a plurality of functions, and different functions correspond to different trigger pressures;
the method further comprises:
in response to the operation on the human-computer interaction interface, displaying the plurality of functions to be matched with the pressure of the operation;
in response to a selection operation on a target function among the plurality of functions, identifying the pressure of the operation as the trigger pressure of the target function.
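A possible realization of this pressure binding is sketched below; the nearest-pressure dispatch rule used at run time is an assumption, the claim only covers recording the operation's pressure as the target function's trigger pressure.

    def bind_trigger_pressure(trigger_pressures, target_function, operation_pressure):
        # record the operation's pressure as the trigger pressure of the chosen function
        trigger_pressures[target_function] = operation_pressure
        return trigger_pressures

    def dispatch(trigger_pressures, applied_pressure):
        # fire the function whose trigger pressure is closest to the applied pressure
        return min(trigger_pressures,
                   key=lambda fn: abs(trigger_pressures[fn] - applied_pressure))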
22. A control processing apparatus for a virtual scene, the apparatus comprising:
a display module, configured to display, in a human-computer interaction interface, at least one control applied to the virtual scene, wherein different controls are associated with different functions in the virtual scene;
a selection module, configured to display a corresponding selected mark for a selected target control among the at least one control;
a replacement module, configured to display, in response to an operation on the human-computer interaction interface, a new control for replacing the target control at a contact position between the operation and the human-computer interaction interface; wherein a response area of the new control is matched with a contact area of the operation in the human-computer interaction interface, and the function associated with the new control is the same as the function of the target control replaced by the new control.
23. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor configured to implement the control processing method for a virtual scene according to any one of claims 1 to 21 when executing the executable instructions stored in the memory.
24. A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the control processing method for a virtual scene according to any one of claims 1 to 21.
25. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the control processing method for a virtual scene according to any one of claims 1 to 21.
CN202210403731.5A 2022-04-18 2022-04-18 Control processing method, device, electronic equipment, storage medium and program product Pending CN116954351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210403731.5A CN116954351A (en) 2022-04-18 2022-04-18 Control processing method, device, electronic equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210403731.5A CN116954351A (en) 2022-04-18 2022-04-18 Control processing method, device, electronic equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116954351A true CN116954351A (en) 2023-10-27

Family

ID=88460656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210403731.5A Pending CN116954351A (en) 2022-04-18 2022-04-18 Control processing method, device, electronic equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116954351A (en)

Similar Documents

Publication Publication Date Title
CN112684970B (en) Adaptive display method and device of virtual scene, electronic equipment and storage medium
CN110507992B (en) Technical support method, device, equipment and storage medium in virtual scene
CN111258486A (en) Information sharing method and device, electronic equipment and storage medium
CN114210047B (en) Object control method and device of virtual scene and electronic equipment
CN112699362A (en) Login verification method and device, electronic equipment and computer readable storage medium
CN113325955A (en) Virtual reality scene switching method, virtual reality device and readable storage medium
CN113891140A (en) Material editing method, device, equipment and storage medium
CN111862280A (en) Virtual role control method, system, medium, and electronic device
TW202217541A (en) Location adjusting method, device, equipment, storage medium, and program product for virtual buttons
CN114116086A (en) Page editing method, device, equipment and storage medium
CN116954351A (en) Control processing method, device, electronic equipment, storage medium and program product
WO2023138142A1 (en) Method and apparatus for motion processing in virtual scene, device, storage medium and program product
CN116688502A (en) Position marking method, device, equipment and storage medium in virtual scene
CN110604918B (en) Interface element adjustment method and device, storage medium and electronic equipment
CN112755510A (en) Mobile terminal cloud game control method, system and computer readable storage medium
CN114210051A (en) Carrier control method, device, equipment and storage medium in virtual scene
US20240024778A1 (en) Updating gameplay parameters based on parameters shown in gameplay video
CN116774835B (en) Interaction method, device and storage medium in virtual environment based on VR handle
CN116366909B (en) Virtual article processing method and device, electronic equipment and storage medium
CN114247132B (en) Control processing method, device, equipment, medium and program product for virtual object
CN114344887A (en) Operation identification method, device, equipment, medium and product based on virtual scene
CN109045686B (en) Method and device for setting hunting organ in game and electronic equipment
CN117180732A (en) Prop processing method, prop processing device, electronic device, storage medium, and program product
CN115562554A (en) Interface information processing method, device, storage medium and equipment
CN117797476A (en) Interactive processing method and device for virtual scene, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40099911

Country of ref document: HK