CN117032540A - Interactive operation response method, device, computer equipment and storage medium

Interactive operation response method, device, computer equipment and storage medium

Info

Publication number
CN117032540A
Authority
CN
China
Prior art keywords
interaction
user interface
graphical user
display area
graphical
Prior art date
Legal status
Pending
Application number
CN202311155594.9A
Other languages
Chinese (zh)
Inventor
蒋易恒
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311155594.9A
Publication of CN117032540A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The embodiment of the application discloses an interactive operation response method, an interactive operation response device, computer equipment and a storage medium. A graphical user interface comprising at least two graphical elements is displayed, and the graphical elements in the graphical user interface are divided into at least two element sets; a display area of each element set in the graphical user interface is determined according to the display positions of the graphical elements; an interaction position of a first interactive operation acting on the graphical user interface is acquired; the element set in the target display area where the interaction position is located is detected to obtain the target graphical element targeted by the first interactive operation; and the first interactive operation is responded to according to the target graphical element. According to the embodiment of the application, the display area corresponding to an interactive operation can be determined first, and then the graphical elements targeted by the interactive operation within that display area are detected, so that the number of graphical elements that need to be detected is reduced, which lowers the performance overhead and the CPU load and improves the running fluency of the client.

Description

Interactive operation response method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to an interactive operation response method, an interactive operation response apparatus, a computer device, and a computer-readable storage medium.
Background
When a user performs an interactive operation on a graphical user interface, it is necessary to detect the graphical element that intersects the position of the user's interactive operation and then respond to the interactive operation according to the response mechanism configured for that graphical element. The graphical element targeted by the interactive operation is typically determined by traversing each graphical element of the graphical user interface and detecting whether the display position of the graphical element intersects the interaction position of the interactive operation.
If the user frequently performs interactive operations on the graphical user interface, or performs multiple interactive operations simultaneously, all the graphical elements on the graphical user interface need to be traversed for each interactive operation, which results in high performance overhead, increased CPU load, and poor running smoothness of the client.
Disclosure of Invention
The embodiment of the application provides an interactive operation response method, an interactive operation response device, computer equipment and a storage medium, which can improve the response speed to interactive operation.
The interactive operation response method provided by the embodiment of the application comprises the following steps:
displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
determining a display area of each element set in the graphical user interface according to the display position of the graphical element;
acquiring an interaction position of a first interaction operation acting on the graphical user interface;
detecting an element set in a target display area where the interaction position is located to obtain a target graphic element targeted by the first interaction operation;
responding to the first interactive operation according to the target graphic element.
Correspondingly, the embodiment of the application also provides an interactive operation response device, which comprises:
a display unit for displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
a region determining unit, configured to determine a display region of each element set in the graphical user interface according to a display position of the graphical element;
An acquisition unit configured to acquire an interaction position of a first interaction operation acting on the graphical user interface;
the detection unit is used for detecting an element set in a target display area where the interaction position is located to obtain a target graphic element targeted by the first interaction operation;
and the response unit is used for responding to the first interactive operation according to the target graphic element.
Correspondingly, the embodiment of the application also provides computer equipment, which comprises a memory and a processor; the memory stores a computer program, and the processor is configured to run the computer program in the memory, so as to execute any one of the interactive operation response methods provided by the embodiments of the present application.
Accordingly, embodiments of the present application also provide a computer readable storage medium for storing a computer program loaded by a processor to perform any of the interactive operation response methods provided by the embodiments of the present application.
According to the embodiment of the application, a graphical user interface comprising at least two graphical elements is displayed, and the graphical elements in the graphical user interface are divided into at least two element sets; a display area of each element set in the graphical user interface is determined according to the display positions of the graphical elements; an interaction position of a first interaction operation acting on the graphical user interface is acquired; the element set in the target display area where the interaction position is located is detected to obtain the target graphic element targeted by the first interaction operation; and the first interaction operation is responded to according to the target graphic element.
In the embodiment of the application, the graphic elements in the graphical user interface are divided into at least two element sets, and the display area corresponding to each element set is different, so the display area corresponding to an interactive operation can be determined first, and then only the graphic elements targeted by the interactive operation within that display area are detected; it is not necessary to detect all the graphic elements in the graphical user interface, which reduces the number of graphic elements that need to be detected, lowers the performance overhead and the CPU load, and improves the running fluency of the client.
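For illustration only, the following is a minimal Python sketch of the response flow summarised above, assuming that display areas are axis-aligned rectangles; the element attributes and helper names (hit, z_depth, respond) are hypothetical and are not taken from the application.
```python
# Minimal sketch of the flow described above (not the application's own code):
# locate the target display area first, then hit-test only the elements of its set.
def respond_to_interaction(element_sets, display_areas, interaction_pos):
    x, y = interaction_pos
    for area, element_set in zip(display_areas, element_sets):
        xmin, ymin, xmax, ymax = area          # display area as a bounding rectangle
        if xmin <= x <= xmax and ymin <= y <= ymax:
            # detect only the elements of this set, not the whole interface
            candidates = [e for e in element_set if e.hit(interaction_pos)]
            if candidates:
                # the element displayed on the uppermost layer responds
                target = min(candidates, key=lambda e: e.z_depth)
                target.respond(interaction_pos)
            return
```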
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an interactive operation response method provided by an embodiment of the application;
FIG. 2 is a schematic diagram of a graphical user interface provided by an embodiment of the present application;
FIG. 3 is a schematic view of a display area of a graphical user interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of element detection provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an interactive operation response device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides an interactive operation response method, an interactive operation response device, computer equipment and a computer readable storage medium. The interactive operation response device may be integrated in a computer device, which may be a server or a terminal.
The terminal may include a mobile phone, a wearable smart device, a tablet computer, a notebook computer, a personal computer (PC), a vehicle-mounted computer, and the like.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
Detailed descriptions are given below. The order in which the following embodiments are described is not intended to limit the preferred order of the embodiments.
The present embodiment will be described from the viewpoint of an interactive operation response device, which may be integrated in a computer apparatus, which may be a server or a terminal, or the like.
The specific flow of the interactive operation response method provided by the embodiment of the application can be as follows, as shown in fig. 1:
101. a graphical user interface is displayed, the graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two sets of elements.
For example, a graphical user interface may be displayed with at least two graphical elements displayed thereon.
The graphical user interface (GUI) may be obtained by executing a software application on a processor of a mobile terminal or other terminal and rendering it on a display, and may be the display screen interface of the terminal device.
A graphic element (Graphic) is a graphical element included in the graphical user interface, that is, an element that can be seen and operated by the user; through graphic elements, the user can interact with the terminal device to meet interaction requirements. Graphic elements may include pictures, buttons, controls, menus, and the like.
In the interactive operation response method provided by the application, the graphic elements in the graphical user interface can be divided into a plurality of element sets, the display positions of the graphic elements in each element set are similar, and the display areas of different element sets on the graphical user interface do not overlap.
The element sets may be divided in advance; for example, the element set corresponding to each graphic element may be preset and configured by a developer in the computer program of the client, or the graphic elements may be classified by a preset code segment in the computer to obtain the element set corresponding to each graphic element.
Optionally, the graphical elements may be automatically grouped according to interface data of the graphical user interface before the graphical user interface is displayed, that is, in an embodiment, before the step of displaying the graphical user interface, the interactive operation response method provided by the embodiment of the present application may further include:
acquiring interface data of a graphical user interface, wherein the interface data comprises position information of graphical elements on the graphical user interface, and the position information is used for indicating the display positions of the graphical elements;
classifying the graphic elements according to the position information to obtain at least two element sets, wherein the display positions of the graphic elements in each element set are similar.
The interface data may be data required for displaying the graphical user interface, for example, may include information such as position information, size information, shape information, and color of the graphical element, and a display position of the graphical element in the graphical user interface may be determined according to the position information of the graphical element.
For example, when an instruction for displaying the graphical user interface triggered by the user is received, the screen size of the screen for displaying the graphical user interface is obtained, the screen is divided into at least two screen areas according to the screen size, then the corresponding screen area of each graphical element is determined according to the display position information, and the graphical elements of the graphical user interface are classified according to the graphical elements in each screen area, so that element combinations with the same number as the screen areas are obtained.
For example, the screen may be divided into 4 screen areas according to a horizontal center line and a vertical center line of the screen; then, the screen area in which each graphic element is to be displayed is determined according to the position information of the graphic element in the graphical user interface, and 4 element sets are obtained from the graphic elements corresponding to each screen area, as illustrated by the sketch below.
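A minimal Python sketch of this quadrant-based grouping, for illustration only; the Element type and its x/y fields are assumptions and are not part of the application.
```python
# Illustrative only: assign each graphic element to one of four element sets
# according to the screen quadrant its display position falls into.
from dataclasses import dataclass

@dataclass
class Element:          # hypothetical element record
    name: str
    x: float            # display position in screen coordinates
    y: float

def group_by_quadrant(elements, screen_w, screen_h):
    """Split elements along the horizontal and vertical center lines of the screen."""
    sets = {"top_left": [], "top_right": [], "bottom_left": [], "bottom_right": []}
    for e in elements:
        horiz = "left" if e.x < screen_w / 2 else "right"
        vert = "top" if e.y < screen_h / 2 else "bottom"
        sets[f"{vert}_{horiz}"].append(e)
    return [s for s in sets.values() if s]   # drop empty quadrants

# Example on a 1920x1080 screen
elements = [Element("menu", 100, 50), Element("joystick", 200, 900), Element("skill", 1700, 950)]
print([len(s) for s in group_by_quadrant(elements, 1920, 1080)])   # -> [1, 1, 1]
```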
Optionally, the graphic elements can be clustered according to the position information through a clustering algorithm to obtain at least two element sets.
Optionally, the element sets may be determined according to the distance between any two graphic elements, so that the distance between any two graphic elements in the same element set is smaller than a preset distance threshold. For example, it can be judged whether the distance between two graphic elements is smaller than the preset distance threshold; if so, the two graphic elements are considered to belong to the same element set, and in this way the element set corresponding to each graphic element can be obtained. If the number of graphic elements contained in an obtained element set exceeds a preset number threshold, the preset distance threshold can be adjusted and the element sets classified again according to the adjusted threshold, so that at least two element sets are obtained.
102. A display area of each element set in the graphical user interface is determined based on the display position of the graphical element.
Wherein the display area may be a partial area on a screen for displaying a graphical user interface, the display area may be considered as a bounding box of the element set.
For example, for each element set, the display position of each graphic element in the element set on the graphical user interface is obtained, and the display area of the element set in the graphical user interface is determined accordingly, so that each display area contains all the graphic elements in the corresponding element set.
In an embodiment, the display position of a graphic element may be determined according to its position information and size information. The position information includes the coordinates of the center, the upper-left corner vertex, or another reference point of the graphic element, and the size information represents the display size of the graphic element on the graphical user interface. A rectangular area corresponding to each graphic element can be determined according to the position information and the size information, and the vertex coordinates on any diagonal of the rectangular area can be used as the display position information of the graphic element to indicate its display position.
For each element set, the coordinate values of the graphic elements in the horizontal direction are sorted according to the display position information to obtain the minimum and maximum horizontal coordinates; the minimum and maximum vertical coordinates can be obtained in the same way, thereby determining the display area corresponding to the element set.
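As a hedged illustration of this bounding-box computation, the sketch below derives each element's rectangle from assumed x/y/width/height attributes and takes the coordinate extremes over the set; the attribute names are hypothetical.
```python
# Illustrative only: display area (bounding box) of an element set from the
# rectangles of its graphic elements, taken as coordinate minima and maxima.
def element_rect(e):
    """Rectangle of one element from its position (upper-left vertex) and size."""
    return (e.x, e.y, e.x + e.width, e.y + e.height)

def display_area(element_set):
    rects = [element_rect(e) for e in element_set]
    xmin = min(r[0] for r in rects)      # smallest horizontal coordinate
    ymin = min(r[1] for r in rects)      # smallest vertical coordinate
    xmax = max(r[2] for r in rects)      # largest horizontal coordinate
    ymax = max(r[3] for r in rects)      # largest vertical coordinate
    return (xmin, ymin, xmax, ymax)
```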
Step 102 may be performed while the graphical user interface is being displayed, or when determining the graphic element targeted by the interactive operation that is first responded to. That is, in an embodiment, the step "determining, according to the display position of the graphic element, the display area of each element set in the graphical user interface" may specifically include:
When a second interactive operation acting on the graphical user interface is acquired, acquiring the display position of each graphical element in the graphical user interface, wherein the second interactive operation is the first interactive operation for which response processing is performed based on the graphical elements in the graphical user interface;
determining a graphic element aimed by the second interactive operation according to the display position, and responding to the interactive operation based on the graphic element aimed by the second interactive operation;
a display area of each element set in the graphical user interface is determined based on the display location.
The second interactive operation may be the first interactive operation for which response processing is performed based on the graphical elements of the graphical user interface. For example, it may be the first interactive operation performed by the user with respect to the graphical user interface, or one of a plurality of interactive operations performed by the user simultaneously; in this embodiment, the second interactive operation can be considered to be responded to before the first interactive operation.
For example, when the second interactive operation acting on the graphical user interface is acquired, the display position of each graphical element in the graphical user interface may be acquired, the graphical element for which the second interactive operation is directed is determined according to the interactive position of the second interactive operation and the display position of the graphical element, and the response to the second interactive operation is performed through a response mechanism configured by the graphical element.
And determining a display area corresponding to each element set according to the display position of each graphic element.
For example, as shown in fig. 2, the graphical user interface includes a plurality of graphic elements, and the two fingers in the figure represent two interactive operations performed by the user at the same time. The solid finger is the currently processed interactive operation, that is, the second interactive operation; the second interactive operation is responded to, and the display area corresponding to each element set, which may also be referred to as a bounding box, is determined. The determined display areas may be as shown in fig. 3.
After the bounding box corresponding to each element set is determined, when the interactive operation corresponding to the dotted finger in fig. 3 and other interactive operations are processed, whether each interactive operation intersects a display area can be determined according to the display areas determined in fig. 3, and the target display area intersecting the interactive operation can be obtained. The target display area and the determined target graphic element may be as shown in fig. 4, and only the graphic elements in the target display area are processed.
In an embodiment, the first interaction operation and the second interaction operation may be interaction operations performed by a user simultaneously, that is, the first interaction operation and the second interaction operation are included in at least two interaction operations that act on the graphical user interface simultaneously, and when the second interaction operation acting on the graphical user interface is acquired, acquiring a display position of each graphical element in the graphical user interface includes:
Receiving at least two interactive operations simultaneously acting on the graphical user interface;
screening the at least two interactive operations to obtain the second interactive operation;
a display position of each graphical element in the graphical user interface is obtained.
For example, when at least two interactive operations are received simultaneously, one of them can be selected for response processing, and that interactive operation is the second interactive operation. In order to increase the speed of determining the graphic element targeted by an interactive operation, the number of graphic elements detected each time may be reduced by limiting the number of elements in each display area. In an embodiment, the number of element sets may therefore be determined according to the number of graphic elements in the graphical user interface, that is, the step of "determining the display area of each element set in the graphical user interface according to the display position of the graphic elements" may specifically include:
acquiring the number of graphic elements contained in each element set;
if the number is greater than the preset threshold, dividing the graphic elements in the element set according to the display positions of the graphic elements in the element set to obtain at least two element subsets;
Updating the element set of the graphical user interface based on the at least two element subsets;
and determining the display area of each updated element set in the graphical user interface according to the display position of the graphical element.
In some scenes, the graphical user interface contains a large number of graphic elements. If a display area contains too many graphic elements, determining the graphic element matching the interactive operation in that display area still requires traversing many graphic elements. Therefore, the number of graphic elements contained in each element set can be obtained; if the number is greater than the preset threshold, which indicates that too many graphic elements would fall within the corresponding display area, the graphic elements can be clustered according to their display positions to obtain a plurality of element subsets.
Optionally, the number of class clusters may be determined according to the preset threshold and the number of graphic elements, so that the number of clusters obtained by clustering meets the requirement; for a graphic element that is not aggregated into any class cluster, its element set is the one corresponding to the class cluster closest to that graphic element.
Optionally, the distance between any two graphic elements in the element set can be calculated, and a distance threshold determined as the average of the maximum and minimum of these distances; it can then be judged whether the distance between two graphic elements is smaller than the distance threshold, and if so, they are considered to belong to the same element subset, so that the element subset corresponding to each graphic element can be obtained.
The element subsets are then used as element sets and the original element set is deleted, so that the updated element sets of the graphical user interface are obtained; the number of updated element sets is greater than the number of element sets before updating.
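The following Python sketch illustrates one possible reading of the splitting rule above (distance threshold taken as the mean of the largest and smallest pairwise distances); the element attributes and the max_elements parameter are assumptions for illustration only.
```python
# Illustrative only: split an oversized element set into subsets of nearby
# elements, then replace the original set with the resulting subsets.
import math
from itertools import combinations

def split_element_set(element_set):
    dists = [math.dist((a.x, a.y), (b.x, b.y)) for a, b in combinations(element_set, 2)]
    threshold = (max(dists) + min(dists)) / 2      # mean of max and min pairwise distance
    subsets = []
    for e in element_set:
        # join the first subset whose members are all closer than the threshold
        for sub in subsets:
            if all(math.dist((e.x, e.y), (m.x, m.y)) < threshold for m in sub):
                sub.append(e)
                break
        else:
            subsets.append([e])
    return subsets

def update_element_sets(element_sets, max_elements=8):
    updated = []
    for s in element_sets:
        updated.extend(split_element_set(s) if len(s) > max_elements else [s])
    return updated
```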
103. An interaction location for a first interaction with a graphical user interface is obtained.
The first interactive operation may be an operation by which the user interacts with the graphical user interface; the user may perform a specific action through an input device such as a mouse, keyboard, or touch screen, and the interactive operation may be a click, hover, drag, input, and so on. The interaction position may be the position on the screen at which the first interactive operation acts, and may be represented by position information of that operation position. For example, if the first interactive operation is a click operation, the interaction position may be the click position of the click operation.
For example, the interaction location of the first interaction may be obtained upon detection of the first interaction acting on the graphical user interface.
104. And detecting an element set in a target display area where the interaction position is located to obtain a target graphic element aimed at by the first interaction operation.
After the interaction position of the first interactive operation is obtained, it is necessary to detect whether the display position of a graphic element intersects the interaction position of the user's interactive operation, and then to respond to the user's interactive operation according to the response mechanism configured for that graphic element.
For example, in a game interface, it is necessary to detect whether or not an input pointer (mouse or touch screen, etc.) of a user hits a graphic element, and when detecting the hit graphic element, a specific program is run. For example, when the user's finger presses the "start game" button, the program should detect this event and trigger a script program that causes the game to start.
The usual way of determining the graphic element targeted by an interactive operation is to traverse each graphic element of the graphical user interface, detect whether the display position of the graphic element intersects the interaction position of the interactive operation, obtain the graphic elements whose display positions intersect the interaction position, select from them the graphic element whose display layer is uppermost, and respond to the interactive operation through that graphic element.
When the graphical user interface is complex, traversing all the graphic elements in the graphical user interface leads to performance degradation, an increased CPU load, and frame stuttering.
In addition, if the user frequently performs the interactive operation on the graphical user interface or triggers multiple interactive operations at the same time, for each interactive operation, the terminal device needs to traverse all the graphical elements on the graphical user interface to determine the graphical elements intersected with the position of each interactive operation, which can result in high performance overhead, increase the CPU load, and cause poor running smoothness of the client.
In the interactive operation response method provided by the embodiment of the application, the graphic elements in the graphical user interface are divided into at least two element sets, and the display area corresponding to each element set is different, so the display area corresponding to the interactive operation can be determined first, and then only the graphic elements in that display area intersecting the interaction position of the interactive operation are detected, which reduces the number of graphic elements to be detected, reduces the performance cost and the CPU load, and improves the running fluency of the client.
For example, the display area where the interaction position is located may be determined according to the interaction position and the position of the display area corresponding to each element set; that display area is the target display area. The graphic element targeted by the first interaction operation in the target display area (i.e., among the graphic elements of the element set corresponding to the target display area) is then determined according to the display positions of those graphic elements and the interaction position of the first interaction operation, so as to obtain the target graphic element. That is, in an embodiment, the step of "detecting the element set in the target display area where the interaction position is located to obtain the target graphic element targeted by the first interaction operation" may specifically include:
Matching is carried out according to the interaction position and the position information of the display area, and a target display area where the interaction position is located is obtained;
and detecting the graphic element corresponding to the target display area according to the interaction position to obtain the target graphic element aimed at by the first interaction operation.
For example, the position information of each display area may be acquired, whether the interaction position is in the display area is determined according to the position relationship between the interaction position and the display area, and if the interaction position is in the display area, the target display area is obtained.
And detecting the graphic elements in the target display area to obtain target graphic elements aimed at by the first interactive operation.
The at least two display areas can be matched against the interaction position at the same time, or matched one by one in a preset order until the target display area is determined. The preset order may be set in advance by a developer, or display areas containing more graphic elements may be matched first according to the number of graphic elements in each display area. Optionally, the display areas most frequently operated by the user can be determined from event-tracking (buried-point) data in the background server, and the matching order determined according to the operation heat of each display area, with hotter display areas matched first. A sketch of this matching step follows.
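A minimal Python sketch of the matching step, for illustration only; display areas are assumed to be rectangles (xmin, ymin, xmax, ymax), and the ordering key stands in for any of the preset, element-count, or heat-based orders mentioned above.
```python
# Illustrative only: find the target display area containing the interaction position.
def contains(area, point):
    x, y = point
    xmin, ymin, xmax, ymax = area
    return xmin <= x <= xmax and ymin <= y <= ymax

def find_target_area(areas_with_sets, interaction_pos, order_key=None):
    """areas_with_sets: list of (display_area, element_set) pairs.
    order_key: optional sort key implementing a preset, size-based or heat-based order."""
    candidates = sorted(areas_with_sets, key=order_key) if order_key else areas_with_sets
    for area, element_set in candidates:
        if contains(area, interaction_pos):
            return area, element_set           # target display area and its element set
    return None                                # interaction position outside every area

# Example order: match the display areas with more graphic elements first
# target = find_target_area(pairs, (640, 360), order_key=lambda p: -len(p[1]))
```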
There may be a plurality of graphic elements at the interaction location, and the target graphic element may be determined through a hierarchy of graphic elements, that is, in an embodiment, the step of detecting the graphic element in the target display area to obtain the target graphic element for which the first interaction operation is performed may specifically include:
according to the interaction position, carrying out ray projection detection on an element set corresponding to the target display area, and determining candidate graphic elements intersected with the first interaction operation;
and determining the target graphic element displayed at the uppermost layer according to the display hierarchical relationship of the candidate graphic elements.
For example, a ray may be specifically determined according to the interaction location, the intersection detection may be performed on the ray and the graphic element in the target display area, and the candidate graphic element intersected with the ray may be determined, and optionally, the ray projection detection may be performed through a GraphicRaycaster component.
Then, according to the display hierarchical relationship of the candidate graphic elements, the candidate graphic element displayed at the uppermost layer is taken as the target graphic element.
Specifically, the candidate graphic elements can be placed into an array and sorted by z-axis depth, and the candidate graphic element displayed at the uppermost layer is taken as the target graphic element. The z-axis depth represents the display level: the smaller the z-axis depth, the closer the candidate graphic element is to the upper layer. For example, assume two candidate graphic elements A and B, where the z-axis depth of A is 2 and the z-axis depth of B is 3; then A is displayed above B, and the first interactive operation is responded to by candidate graphic element A.
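The sketch below illustrates this selection in Python; the z_depth, x/y/width/height attribute names are assumptions, and a simple rectangle test stands in for the ray-projection detection performed by a component such as GraphicRaycaster.
```python
# Illustrative only: among the candidates intersecting the interaction position,
# pick the element on the uppermost layer (smallest z-axis depth) to respond.
def pick_target_element(element_set, interaction_pos):
    x, y = interaction_pos
    candidates = [e for e in element_set
                  if e.x <= x <= e.x + e.width and e.y <= y <= e.y + e.height]
    if not candidates:
        return None
    return min(candidates, key=lambda e: e.z_depth)   # smaller depth = upper layer
```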
The user may perform a plurality of interactive operations within a short time; for example, a game player may trigger several interactive operations in quick succession to release game skills. For interactive operations performed within a short time, the targeted graphic elements may be close to one another, so when determining the target display area where an interactive operation is located, the target display area of the previous interactive operation can be detected first to reduce the time needed to detect the target display area. That is, in an embodiment, the step of "matching according to the interaction position and the position information of the display areas to obtain the target display area where the interaction position is located" may specifically include:
determining a target display area corresponding to the associated interactive operation of which the response processing time sequence is earlier than that of the first interactive operation;
matching the interaction position of the first interaction operation with a target display area corresponding to the associated interaction operation;
and if the target display areas corresponding to the associated interactive operation are not matched, matching the display areas except the target display areas corresponding to the associated interactive operation.
For example, when the first interactive operation is detected, the target display area corresponding to the interactive operation whose response processing timing is earlier than that of the first interactive operation (i.e., the associated interactive operation) may be determined; the associated interactive operation may be the most recent interactive operation for which response processing was performed.
The interaction position of the first interactive operation is matched against the target display area corresponding to the associated interactive operation; if it matches that target display area, the target display area corresponding to the associated interactive operation is taken as the target display area corresponding to the first interactive operation. If it does not match, the display areas other than the target display area corresponding to the associated interactive operation are matched, and the matched display area is taken as the target display area corresponding to the first interactive operation.
Alternatively, the step of determining the target display area corresponding to the associated interactive operation whose response processing timing is earlier than that of the first interactive operation may be performed when the number of interactive operations detected within a preset time exceeds a preset operation-number threshold.
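As an illustration of checking the previous (associated) interaction's target display area first, here is a small Python sketch; the class and attribute names are hypothetical and a simple rectangle-containment test is used.
```python
# Illustrative only: try the display area that answered the previous interaction
# first, and fall back to the remaining areas only on a miss.
class AreaMatcher:
    def __init__(self, areas_with_sets):
        self.areas_with_sets = areas_with_sets   # list of (area, element_set) pairs
        self.last_hit = None                     # target of the associated interaction

    @staticmethod
    def _contains(area, point):
        x, y = point
        xmin, ymin, xmax, ymax = area
        return xmin <= x <= xmax and ymin <= y <= ymax

    def match(self, interaction_pos):
        if self.last_hit and self._contains(self.last_hit[0], interaction_pos):
            return self.last_hit                 # previous target area matches again
        for pair in self.areas_with_sets:
            if pair is not self.last_hit and self._contains(pair[0], interaction_pos):
                self.last_hit = pair
                return pair
        return None
```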
105. The first interactive operation is responded according to the target graphic element.
For example, the first interaction may be responded according to a response mechanism corresponding to the target graphic element.
As can be seen from the above, in the embodiment of the present application, a graphical user interface comprising at least two graphical elements is displayed, and the graphical elements in the graphical user interface are divided into at least two element sets; the display area of each element set in the graphical user interface is determined according to the display positions of the graphical elements; the interaction position of a first interaction operation acting on the graphical user interface is acquired; the element set in the target display area where the interaction position is located is detected to obtain the target graphic element targeted by the first interaction operation; and the first interaction operation is responded to according to the target graphic element.
Similarly, when the terminal device displays another graphical user interface, which may be different from the currently displayed graphical user interface, interactive operations acting on that interface can be responded to through steps 101-105.
In some scenarios, the terminal device may continuously refresh the display page, and the graphical user interface may refer to a graphical user interface displayed corresponding to each frame that is refreshed.
In the embodiment of the application, the graphic elements in the graphical user interface are divided into at least two element sets, and the display area corresponding to each element set is different, so the display area corresponding to an interactive operation can be determined first, and then only the graphic elements targeted by the interactive operation within that display area are detected; it is not necessary to detect all the graphic elements in the graphical user interface, which reduces the number of graphic elements that need to be detected, lowers the performance overhead and the CPU load, and improves the running fluency of the client.
In order to facilitate better implementation of the interactive operation response method provided by the embodiment of the application, an embodiment further provides an interactive operation response device. Terms have the same meanings as in the interactive operation response method described above, and for specific implementation details, reference may be made to the description in the method embodiment.
The interactive operation response device may be integrated in a computer apparatus, as shown in fig. 5, and may include: a display unit 301, a region determination unit 302, an acquisition unit 303, a detection unit 304, and a response unit 305 are specifically as follows:
(1) A display unit 301 for displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two sets of elements.
(2) The area determining unit 302 is configured to determine a display area of each element set in the graphical user interface according to a display position of the graphical element.
In an embodiment, the area determining unit 302 includes:
a position acquisition subunit, configured to acquire, when a second interaction operation acting on the graphical user interface is acquired, a display position of each graphical element in the graphical user interface, where the second interaction operation is the first interaction operation for which response processing is performed based on the graphical elements in the graphical user interface;
an operation response subunit, configured to determine, according to the display position, a graphic element for which the second interaction operation is directed, and respond to the interaction operation based on the graphic element for which the second interaction operation is directed;
And the display area determining subunit is used for determining the display area of each element set in the graphical user interface according to the display position.
In an embodiment, the area determining unit 302 includes:
a number subunit, configured to obtain the number of graphic elements included in each element set;
dividing sub-units, configured to divide the graphic elements in the element set according to the display positions of the graphic elements in the element set if the number is greater than a preset threshold value, so as to obtain at least two element subsets;
an updating subunit configured to update the element set of the graphical user interface based on at least two element subsets;
and the display area determining subunit is used for determining the display area of each updated element set in the graphical user interface according to the display position of the graphical element.
(3) An acquisition unit 303 for acquiring an interaction location of a first interaction operation acting on the graphical user interface.
(4) And the detection unit 304 is configured to detect an element set in a target display area where the interaction position is located, so as to obtain a target graphic element for which the first interaction operation is directed.
In one embodiment, the detection unit 304 includes:
the matching subunit is used for matching according to the interaction position and the position information of the display area to obtain a target display area where the interaction position is located;
And the element detection subunit is used for detecting the graphic element corresponding to the target display area according to the interaction position to obtain the target graphic element aimed at by the first interaction operation.
In one embodiment, the matching subunit comprises:
the association operation determining module is used for determining a target display area corresponding to the association interaction operation of which the response processing time sequence is earlier than that of the first interaction operation;
the first region matching module is used for matching the interaction position of the first interaction operation with a target display region corresponding to the associated interaction operation;
and the second region matching module is used for matching the display regions except the target display regions corresponding to the associated interactive operation if the target display regions corresponding to the associated interactive operation are not matched.
In one embodiment, the element detection subunit comprises:
the projection detection module is used for carrying out ray projection detection on the element set corresponding to the target display area according to the interaction position and determining candidate graphic elements intersected with the first interaction operation;
and the element determining module is used for determining the target graphic element displayed at the uppermost layer according to the display hierarchical relationship of the candidate graphic elements.
(5) And a response unit 305, configured to respond to the first interaction according to the target graphic element.
In an embodiment, the interactive operation response device provided by the embodiment of the application may further include:
the data acquisition unit is used for acquiring interface data of the graphical user interface, wherein the interface data comprises position information of the graphical elements on the graphical user interface, and the position information is used for indicating the display positions of the graphical elements;
the classifying unit is used for classifying the graphic elements according to the position information to obtain at least two element sets, wherein the display positions of the graphic elements in each element set are similar.
As can be seen from the above, the interactive operation response device according to the embodiment of the present application displays a graphical user interface through the display unit 301, wherein the graphical user interface includes at least two graphical elements, and the graphical elements in the graphical user interface are divided into at least two element sets; the area determination unit 302 determines a display area of each element set in the graphical user interface according to the display position of the graphical element; the acquisition unit 303 acquires an interaction location of a first interaction operation acting on the graphical user interface; the detection unit 304 detects an element set in a target display area where the interaction position is located, and a target graphic element aimed by the first interaction operation is obtained; the response unit 305 responds to the first interactive operation according to the target graphic element.
In the embodiment of the application, the graphic elements in the graphical user interface are divided into at least two element sets, and the display area corresponding to each element set is different, so the display area corresponding to an interactive operation can be determined first, and then only the graphic elements targeted by the interactive operation within that display area are detected; it is not necessary to detect all the graphic elements in the graphical user interface, which reduces the number of graphic elements that need to be detected, lowers the performance overhead and the CPU load, and improves the running fluency of the client.
Correspondingly, the embodiment of the application also provides computer equipment which can be a terminal. Fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer readable storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. It will be appreciated by those skilled in the art that the computer device structure shown in the figures is not limiting of the computer device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, and performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the computer device 500.
In the embodiment of the present application, the processor 501 in the computer device 500 loads the instructions corresponding to the processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions:
displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
determining a display area of each element set in the graphical user interface according to the display position of the graphical element;
acquiring an interaction position of a first interaction operation acting on a graphical user interface;
detecting an element set in a target display area where the interaction position is located to obtain a target graphic element aimed at by a first interaction operation;
The first interactive operation is responded according to the target graphic element.
In an embodiment, the step of determining a display area of each element set in the graphical user interface according to the display position of the graphical element may include:
when a second interactive operation acting on the graphical user interface is acquired, acquiring the display position of each graphical element in the graphical user interface, wherein the second interactive operation is the first interactive operation for which response processing is performed based on the graphical elements in the graphical user interface;
determining a graphic element aimed by the second interactive operation according to the display position, and responding to the interactive operation based on the graphic element aimed by the second interactive operation;
a display area of each element set in the graphical user interface is determined based on the display location.
In an embodiment, the step of determining a display area of each element set in the graphical user interface according to the display position of the graphical element may include:
acquiring the number of graphic elements contained in each element set;
if the number is greater than the preset threshold, dividing the graphic elements in the element set according to the display positions of the graphic elements in the element set to obtain at least two element subsets;
Updating the element set of the graphical user interface based on the at least two element subsets;
and determining the display area of each updated element set in the graphical user interface according to the display position of the graphical element.
In an embodiment, the step of detecting the element set in the target display area where the interaction location is located to obtain the target graphic element for which the first interaction operation is directed may include:
matching is carried out according to the interaction position and the position information of the display area, and a target display area where the interaction position is located is obtained;
and detecting the graphic element corresponding to the target display area according to the interaction position to obtain the target graphic element aimed at by the first interaction operation.
In an embodiment, the step of "matching according to the location information of the interaction location and the display area to obtain the target display area where the interaction location is located" may include:
determining a target display area corresponding to the associated interactive operation of which the response processing time sequence is earlier than that of the first interactive operation;
matching the interaction position of the first interaction operation with a target display area corresponding to the associated interaction operation;
and if the target display areas corresponding to the associated interactive operation are not matched, matching the display areas except the target display areas corresponding to the associated interactive operation.
In an embodiment, the step of detecting the graphic element in the target display area to obtain the target graphic element for which the first interaction operation is performed may include:
according to the interaction position, carrying out ray projection detection on an element set corresponding to the target display area, and determining candidate graphic elements intersected with the first interaction operation;
and determining the target graphic element displayed at the uppermost layer according to the display hierarchical relationship of the candidate graphic elements.
In an embodiment, before the step of displaying the graphical user interface, the method for responding to the interactive operation provided by the embodiment of the application may include:
acquiring interface data of a graphical user interface, wherein the interface data comprises position information of graphical elements on the graphical user interface, and the position information is used for indicating the display positions of the graphical elements;
classifying the graphic elements according to the position information to obtain at least two element sets, wherein the display positions of the graphic elements in each element set are similar.
As can be seen from the above, in the embodiment of the present application, a graphical user interface comprising at least two graphical elements is displayed, and the graphical elements in the graphical user interface are divided into at least two element sets; the display area of each element set in the graphical user interface is determined according to the display positions of the graphical elements; the interaction position of a first interaction operation acting on the graphical user interface is acquired; the element set in the target display area where the interaction position is located is detected to obtain the target graphic element targeted by the first interaction operation; and the first interaction operation is responded to according to the target graphic element.
In the embodiment of the application, the graphic elements in the graphical user interface are divided into at least two element sets, and the display area corresponding to each element set is different, so the display area corresponding to an interactive operation can be determined first, and then only the graphic elements targeted by the interactive operation within that display area are detected; it is not necessary to detect all the graphic elements in the graphical user interface, which reduces the number of graphic elements that need to be detected, lowers the performance overhead and the CPU load, and improves the running fluency of the client.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
Optionally, as shown in fig. 6, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device structure shown in fig. 6 does not constitute a limitation on the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations of the user on or near it (such as operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, according to which corresponding programs are executed. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 501, and it can also receive commands from the processor 501 and execute them. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 501 to determine the type of touch event, and the processor 501 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to realize the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to realize the input and output functions respectively. That is, the touch display screen 503 may also realize an input function as a part of the input unit 506.
The radio frequency circuit 504 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other computer devices.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 505 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 505 and converted into audio data; the audio data is then output to the processor 501 for processing and sent, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to supply power to the various components of the computer device 500. Alternatively, the power supply 507 may be logically connected to the processor 501 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 507 may also include any one or more components such as a direct-current or alternating-current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown in fig. 6, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for the parts of an embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform the steps in any of the interactive operation response methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
Displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
determining a display area of each element set in the graphical user interface according to the display position of the graphical element;
acquiring an interaction position of a first interaction operation acting on a graphical user interface;
detecting an element set in a target display area where the interaction position is located to obtain a target graphic element aimed at by a first interaction operation;
and responding to the first interactive operation according to the target graphic element.
In an embodiment, the step of determining a display area of each element set in the graphical user interface according to the display position of the graphical element may include:
when a second interactive operation acting on the graphical user interface is acquired, acquiring the display position of each graphical element in the graphical user interface, wherein the second interactive operation is the first interactive operation to be response-processed based on the graphical elements in the graphical user interface;
determining the graphic element aimed at by the second interactive operation according to the display position, and responding to the interactive operation based on the graphic element aimed at by the second interactive operation;
and determining a display area of each element set in the graphical user interface according to the display position, as illustrated in the sketch below.
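A hedged sketch of this ordering: the initial interaction (the "second interaction operation" in the wording above) is answered by a one-off scan over every element, and the display positions read during that scan are kept so the element sets and their display areas can be derived afterwards. handleInitialInteraction and the cached structure are assumptions for illustration only.

```typescript
// Illustrative sketch only: answer the initial interaction with a full scan,
// then reuse the positions gathered to build element sets later.
interface UiElement { id: string; x: number; y: number; width: number; height: number; layer: number; }

function handleInitialInteraction(allElements: UiElement[], px: number, py: number) {
  // Respond to the initial interaction by scanning every element once.
  const candidates = allElements.filter(
    (e) => px >= e.x && px <= e.x + e.width && py >= e.y && py <= e.y + e.height
  );
  const hit = candidates.length > 0
    ? candidates.reduce((top, e) => (e.layer > top.layer ? e : top))
    : null;
  // Keep the display positions read above; a later pass (for instance the
  // grouping sketch shown earlier) can turn them into element sets and their
  // display areas without querying the interface again.
  const cachedPositions = allElements.map((e) => ({ id: e.id, x: e.x, y: e.y }));
  return { hit, cachedPositions };
}
```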
In an embodiment, the step of determining a display area of each element set in the graphical user interface according to the display position of the graphical element may include:
acquiring the number of graphic elements contained in each element set;
if the number is greater than the preset threshold, dividing the graphic elements in the element set according to the display positions of the graphic elements in the element set to obtain at least two element subsets;
updating the element set of the graphical user interface based on the at least two element subsets;
and determining the display area of each updated element set in the graphical user interface according to the display position of the graphical element, as illustrated in the sketch below.
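As a sketch of the subdivision step, the following splits any set whose size exceeds a threshold along its wider axis at the median position, recursing until every subset is small enough. MAX_SET_SIZE, splitOversizedSet and the median-split rule are assumptions; the patent only requires that an oversized set be divided by display position into at least two subsets.

```typescript
// Illustrative sketch only: split an oversized element set by display
// position so each resulting subset keeps its own, smaller display area.
interface UiElement { id: string; x: number; y: number; width: number; height: number; }

const MAX_SET_SIZE = 32; // assumed threshold

function splitOversizedSet(set: UiElement[]): UiElement[][] {
  if (set.length <= MAX_SET_SIZE) return [set];
  // Pick the axis along which the set spans the larger distance.
  const spanX = Math.max(...set.map((e) => e.x)) - Math.min(...set.map((e) => e.x));
  const spanY = Math.max(...set.map((e) => e.y)) - Math.min(...set.map((e) => e.y));
  const key: "x" | "y" = spanX >= spanY ? "x" : "y";
  // Median split: half the elements on each side, recursing until small enough.
  const sorted = [...set].sort((a, b) => a[key] - b[key]);
  const mid = Math.floor(sorted.length / 2);
  return [
    ...splitOversizedSet(sorted.slice(0, mid)),
    ...splitOversizedSet(sorted.slice(mid)),
  ];
}
```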
In an embodiment, the step of detecting the element set in the target display area where the interaction location is located to obtain the target graphic element at which the first interaction operation is directed may include:
matching is carried out according to the interaction position and the position information of the display area, and a target display area where the interaction position is located is obtained;
and detecting the graphic element corresponding to the target display area according to the interaction position to obtain the target graphic element aimed at by the first interaction operation.
In an embodiment, the step of "matching according to the location information of the interaction location and the display area to obtain the target display area where the interaction location is located" may include:
determining a target display area corresponding to the associated interactive operation of which the response processing time sequence is earlier than that of the first interactive operation;
matching the interaction position of the first interaction operation with a target display area corresponding to the associated interaction operation;
and if the target display areas corresponding to the associated interactive operation are not matched, matching the display areas except the target display areas corresponding to the associated interactive operation.
In an embodiment, the step of detecting the graphic element in the target display area to obtain the target graphic element at which the first interaction operation is directed may include:
according to the interaction position, carrying out ray projection detection on an element set corresponding to the target display area, and determining candidate graphic elements intersected with the first interaction operation;
and determining the target graphic element displayed at the uppermost layer according to the display hierarchical relationship of the candidate graphic elements.

In an embodiment, before the step of displaying the graphical user interface, the method for responding to the interactive operation provided by the embodiment of the application may include:
Acquiring interface data of a graphical user interface, wherein the interface data comprises position information of graphical elements on the graphical user interface, and the position information is used for indicating the display positions of the graphical elements;
classifying the graphic elements according to the position information to obtain at least two element sets, wherein the display positions of the graphic elements in each element set are similar.
As can be seen from the above, in the embodiment of the present application, a graphical user interface is displayed, the graphical user interface including at least two graphical elements, and the graphical elements in the graphical user interface are divided into at least two element sets; a display area of each element set in the graphical user interface is determined according to the display positions of the graphical elements; an interaction position of a first interaction operation acting on the graphical user interface is acquired; the element set in the target display area where the interaction position is located is detected to obtain the target graphic element aimed at by the first interaction operation; and the first interaction operation is responded to according to the target graphic element.
In the embodiment of the application, the graphical elements in the graphical user interface are divided into at least two element sets, and each element set corresponds to a different display area. Therefore, the display area corresponding to an interactive operation can be determined first, and then only the graphical elements in that display area are detected for the interactive operation, without detecting all the graphical elements in the graphical user interface. This reduces the number of graphical elements that need to be detected, lowers the performance overhead and CPU load, and improves the running fluency of the client.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
The storage medium may include a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
The interactive operation response method, apparatus, computer device, and computer-readable storage medium provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (10)

1. An interactive operation response method, comprising:
displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
determining a display area of each element set in the graphical user interface according to the display position of the graphical element;
Acquiring an interaction position of a first interaction operation acting on the graphical user interface;
detecting an element set in a target display area where the interaction position is located to obtain a target graphic element aimed at by the first interaction operation;
responding to the first interactive operation according to the target graphic element.
2. The method of claim 1, wherein determining a display area of each element set in the graphical user interface based on a display position of the graphical element comprises:
when a second interaction operation acting on the graphical user interface is acquired, acquiring a display position of each graphical element in the graphical user interface, wherein the second interaction operation is the first interaction operation for response processing based on the graphic elements in the graphical user interface;
determining a graphic element aimed at by the second interactive operation according to the display position, and responding to the interactive operation based on the graphic element aimed at by the second interactive operation;
and determining the display area of each element set in the graphical user interface according to the display position.
3. The method of claim 1, wherein determining a display area of each element set in the graphical user interface based on a display position of the graphical element comprises:
Acquiring the number of graphic elements contained in each element set;
if the number is greater than a preset threshold, dividing the graphic elements in the element set according to the display positions of the graphic elements in the element set to obtain at least two element subsets;
updating the element set of the graphical user interface based on the at least two element subsets;
and determining the display area of each updated element set in the graphical user interface according to the display position of the graphical element.
4. The method according to claim 1, wherein detecting the element set in the target display area where the interaction location is located to obtain the target graphic element at which the first interaction operation is directed includes:
matching is carried out according to the interaction position and the position information of the display area, and a target display area where the interaction position is located is obtained;
and detecting the graphic element corresponding to the target display area according to the interaction position to obtain the target graphic element aimed at by the first interaction operation.
5. The method of claim 4, wherein the matching according to the interaction location and the location information of the display area to obtain the target display area where the interaction location is located includes:
Determining a target display area corresponding to the associated interactive operation of which the response processing time sequence is earlier than that of the first interactive operation;
matching the interaction position of the first interaction operation with a target display area corresponding to the associated interaction operation;
and if the target display areas corresponding to the associated interaction operations are not matched, matching the display areas except the target display areas corresponding to the associated interaction operations.
6. The method according to claim 4, wherein the detecting the graphic element in the target display area to obtain the target graphic element at which the first interaction operation is directed includes:
performing ray projection detection on the element set corresponding to the target display area according to the interaction position, and determining candidate graphic elements intersected with the first interaction operation;
and determining the target graphic element displayed at the uppermost layer according to the display hierarchical relationship of the candidate graphic elements.
7. The method of any of claims 1-6, wherein prior to the displaying the graphical user interface, the method further comprises:
acquiring interface data of the graphical user interface, wherein the interface data comprises position information of graphical elements on the graphical user interface, and the position information is used for indicating the display positions of the graphical elements;
Classifying the graphic elements according to the position information to obtain at least two element sets, wherein the display positions of the graphic elements in each element set are similar.
8. An interactive operation response device, comprising:
a display unit for displaying a graphical user interface comprising at least two graphical elements, the graphical elements in the graphical user interface being divided into at least two element sets;
a region determining unit, configured to determine a display region of each element set in the graphical user interface according to a display position of the graphical element;
an acquisition unit configured to acquire an interaction position of a first interaction operation acting on the graphical user interface;
the detection unit is used for detecting an element set in a target display area where the interaction position is located to obtain a target graphic element aimed at by the first interaction operation;
and the response unit is used for responding to the first interactive operation according to the target graphic element.
9. A computer device comprising a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program in the memory to perform the interactive operation response method according to any one of claims 1 to 7.
10. A computer readable storage medium for storing a computer program, the computer program being loaded by a processor to perform the interoperation response method of any of claims 1 to 7.
CN202311155594.9A 2023-09-07 2023-09-07 Interactive operation response method, device, computer equipment and storage medium Pending CN117032540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311155594.9A CN117032540A (en) 2023-09-07 2023-09-07 Interactive operation response method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311155594.9A CN117032540A (en) 2023-09-07 2023-09-07 Interactive operation response method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117032540A true CN117032540A (en) 2023-11-10

Family

ID=88633825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311155594.9A Pending CN117032540A (en) 2023-09-07 2023-09-07 Interactive operation response method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117032540A (en)

Similar Documents

Publication Publication Date Title
US20170347153A1 (en) Method of zooming video images and mobile terminal
US20220317862A1 (en) Icon moving method and electronic device
KR20220092937A (en) Screen display control method and electronic device
US11165950B2 (en) Method and apparatus for shooting video, and storage medium
CN113546419B (en) Game map display method, game map display device, terminal and storage medium
CN108984142B (en) Split screen display method and device, storage medium and electronic equipment
CN109901761A (en) A kind of content display method and mobile terminal
CN113332719B (en) Virtual article marking method, device, terminal and storage medium
CN114419229A (en) Image rendering method and device, computer equipment and storage medium
CN113332726A (en) Virtual character processing method and device, electronic equipment and storage medium
CN116542740A (en) Live broadcasting room commodity recommendation method and device, electronic equipment and readable storage medium
CN112799754B (en) Information processing method, information processing device, storage medium and computer equipment
CN113332718B (en) Interactive element query method and device, electronic equipment and storage medium
CN116797631A (en) Differential area positioning method, differential area positioning device, computer equipment and storage medium
CN117032540A (en) Interactive operation response method, device, computer equipment and storage medium
CN113780291A (en) Image processing method and device, electronic equipment and storage medium
CN108829600B (en) Method and device for testing algorithm library, storage medium and electronic equipment
CN112783386A (en) Page jump method, device, storage medium and computer equipment
CN112817768B (en) Animation processing method, device, equipment and computer readable storage medium
CN114416234B (en) Page switching method and device, computer equipment and storage medium
CN110661919B (en) Multi-user display method, device, electronic equipment and storage medium
CN114489858B (en) Application software information setting method and device, terminal equipment and storage medium
CN113905280B (en) Barrage information display method, device, equipment and storage medium
CN117408776A (en) Virtual resource display method and device, computer equipment and storage medium
CN114146410A (en) Visual field control method, device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination