CN116099198A - Virtual object control method and device and electronic terminal - Google Patents

Virtual object control method and device and electronic terminal

Info

Publication number
CN116099198A
CN116099198A (application CN202310004780.6A)
Authority
CN
China
Prior art keywords
virtual object
area
action
virtual
action area
Prior art date
Legal status
Pending
Application number
CN202310004780.6A
Other languages
Chinese (zh)
Inventor
王辰
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310004780.6A
Publication of CN116099198A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a virtual object control method and device and an electronic terminal, relates to the technical field of games, and alleviates the technical problem of users' low efficiency in placing models in a virtual scene. The method comprises the following steps: in the case that a first action area and a second action area overlap, determining, in response to a fast throw operation for the first virtual object, the throwing direction corresponding to that operation, where the first action area is the action area currently corresponding to the first virtual object in the placement area and the second action area is the action area of a second virtual object already placed in the area; and controlling the first virtual object to be placed in a target area, where the target area is the area that lies in the throwing direction and is nearest to the second action area.

Description

Virtual object control method and device and electronic terminal
Technical Field
The disclosure relates to the technical field of games, and in particular to a virtual object control method, a virtual object control device, and an electronic terminal.
Background
Currently, in many games, a user may put virtual objects in a virtual scene. For example, many games have custom scene adornment functions, and a common placement operation is for a user to click on an item to be placed and drag to a target location.
However, if multiple virtual objects need to be placed adjacent to one another, the user may be unable to place them quickly and accurately. Visual confusion arises especially when a virtual object's visible size differs from its actual footprint, so that the user cannot correctly identify the space the object occupies. For example, an object's model may look small yet occupy four grid cells in the virtual scene; the user then cannot place the model quickly and accurately, and the efficiency of placing models in the virtual scene suffers.
Disclosure of Invention
The invention aims to provide a control method and device for a virtual object and an electronic terminal, so as to alleviate the technical problem of users' low efficiency in placing models in a virtual scene.
In a first aspect, an embodiment of the present disclosure provides a control method for a virtual object. A graphical user interface is provided by a terminal device, and at least part of a virtual scene is displayed in the graphical user interface. The virtual scene includes virtual objects and an area for placing them, the virtual objects including a first virtual object not yet placed in the area and a second virtual object already placed in the area. The method comprises the following steps:
in response to a fast throw operation (fling) for the first virtual object, in the case that the first action area overlaps the second action area, determining a throwing direction corresponding to the fast throw operation; the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
controlling the first virtual object to be placed in a target area; wherein the target area is an area in the throwing direction and closest to the second action area.
In a second aspect, a control device for a virtual object is provided. A graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene comprises virtual objects and an area for placing them, and the virtual objects comprise a first virtual object not yet placed in the area and a second virtual object already placed in the area. The device comprises:
a determining module, configured to determine, in response to a fast throw operation for the first virtual object and in the case where the first action area overlaps the second action area, the throwing direction corresponding to the fast throw operation; the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
a control module, configured to control the first virtual object to be placed in a target area, wherein the target area is the area in the throwing direction that is nearest to the second action area.
In a third aspect, an embodiment of the present disclosure further provides an electronic terminal, including a memory, and a processor, where the memory stores a computer program that can be executed on the processor, and the processor executes the method according to the first aspect.
In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to perform the method of the first aspect described above.
The embodiment of the disclosure brings the following beneficial effects:
According to the control method and device for a virtual object and the electronic terminal provided by the embodiments of the disclosure, when the action area of a first virtual object that has not been placed overlaps the action area of a second virtual object already placed in the area, the throwing direction corresponding to a fast throw operation on the first virtual object is determined in response to that operation, and the first virtual object is controlled to be placed in the area that lies in the throwing direction and is nearest to the second action area. In other words, when an unplaced virtual object overlaps a placed one, the unplaced object is snapped to the position nearest the placed object along the direction of the user's fast throw. This achieves a model adsorption (snap) effect through a single user operation whenever footprints overlap during placement, and the result matches the direction of the user's operation. Even when a virtual object's size differs from its actual footprint, so that the user cannot accurately identify the footprint, the user can still complete the placement of adjacent virtual objects quickly, accurately, and efficiently, which alleviates the technical problem of low placement efficiency and improves the efficiency with which users place models in a virtual scene.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings required in the detailed description are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 illustrates an application scenario schematic provided by an embodiment of the present disclosure;
fig. 2 shows a schematic structural diagram of a mobile phone according to an embodiment of the disclosure;
fig. 3 illustrates a schematic usage scenario of a touch terminal provided by an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method for controlling a virtual object according to an embodiment of the present disclosure;
FIG. 5 shows a display schematic of a graphical user interface provided by an embodiment of the present disclosure;
FIG. 6 illustrates a display schematic of another graphical user interface provided by an embodiment of the present disclosure;
FIG. 7 illustrates a display schematic of another graphical user interface provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present disclosure;
fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the present disclosure will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
The terms "comprising" and "having" and any variations thereof, as referred to in the embodiments of the disclosure, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
At present, many games have a custom scene decoration function, and a common placement operation is for the user to click an object to be placed and drag it to a target position. For example, in a game the user can perform the drag operation of custom scene decoration, i.e., drag the item being edited; of course, original items may also already exist in the scene.
However, in the existing drag-based decoration of a custom scene, when an object model's size differs from its actual footprint (for example, a model is very small but occupies four grid cells in the scene), placing multiple objects adjacent to one another may be neither quick nor accurate. For example, the object being dragged may overlap an existing object because of their actual footprints, requiring manual adjustment by the user, and the confusion caused by the model sizes makes accurate, rapid adjustment difficult.
One existing solution prompts the user that an overlap exists, for example with red blocks around the model combined with text, guiding the user to adjust manually. This improves feedback, but it does not simplify or optimize the user's operation to raise operating efficiency.
Another existing solution snaps the item being dragged once its action area comes within a certain distance of a placed item's action area, so that the two items end up accurately adjacent. Although this alleviates the problem to some extent, the item still has to be moved near the action area of the item it should adjoin before snapping takes effect, and if the action area is small, the probability of a second adjustment is high.
Therefore, at present users cannot place models in a virtual scene quickly and accurately, and their placement efficiency in the virtual scene is low.
Based on the above, the embodiment of the disclosure provides a control method and device for a virtual object and an electronic terminal, by which the technical problem that a user has low placement efficiency on a model in a virtual scene can be solved.
The control method of the virtual object in one embodiment of the present disclosure may be executed on a local terminal device or a server. When the control method of the virtual object runs on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example cloud games. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: storage and execution of the virtual object control method are completed on a cloud game server, while the client device only receives and sends data and presents the game picture. For example, the client device may be a display device near the user side with a data transmission function, such as a mobile terminal, a television, a computer, or a handheld computer, while the information processing is performed by the cloud game server in the cloud. When playing, the user operates the client device to send operation instructions to the cloud game server; the server runs the game according to the instructions, encodes and compresses data such as the game picture, and returns it over the network to the client device, which finally decodes the data and outputs the game picture.
In an alternative embodiment, taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with a user through a graphical user interface, namely, conventionally downloading and installing a game program through the electronic device and running the game program. The way in which the local terminal device provides the graphical user interface to the user may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal, or provided to the user by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, the embodiment of the disclosure provides a control method of a virtual object, and a graphical user interface is provided through a terminal device, where the terminal device may be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
For example, as shown in fig. 1, fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure. The application scenario may include a touch terminal (e.g., a mobile phone 102) and a server 101, and the touch terminal may communicate with the server 101 through a wired network or a wireless network. The touch terminal is used for running a virtual desktop, and through the virtual desktop, interaction with the server 101 can be performed, so that control over content in the server 101 is achieved.
The touch terminal of the present embodiment is illustrated by taking the mobile phone 102 as an example. The handset 102 includes radio frequency (Radio Frequency, RF) circuitry 110, memory 120, a touch screen 130, a processor 140, and the like. It will be appreciated by those skilled in the art that the handset structure shown in fig. 2 does not limit the handset, which may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. Those skilled in the art will also appreciate that the touch screen 130 belongs to the user interface (User Interface, UI), and that the handset 102 may provide fewer user interface components than shown.
RF circuitry 110 may also communicate with networks and other devices through wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communications (Global System of Mobile communication, GSM for short), general packet radio service (General Packet Radio Service, GPRS for short), code division multiple access (Code Division Multiple Access, CDMA for short), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA for short), long term evolution (Long Term Evolution, LTE for short), email, short message service (Short Messaging Service, SMS for short), and the like.
The memory 120 may be used to store software programs and modules that the processor 140 executes to perform various functional applications and data processing of the handset 102 by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the cell phone 102, etc. In addition, memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The touch screen 130 may be used to display a graphical user interface and to receive user operations on that interface. A specific touch screen 130 may include a display panel and a touch panel. The display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may collect contact or contactless operations by the user on or near it (for example, operations performed on or near the touch panel with any suitable object or accessory such as a finger 103 or a stylus, as shown in fig. 3) and generate preset operation instructions. In addition, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position and gesture of the user's touch, detects the signals produced by the touch operation, and transmits the signals to the touch controller; the touch controller receives the touch information, converts it into information the processor can handle, sends it to the processor 140, and can receive and execute commands sent by the processor 140. The touch panel may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave, or by any technology developed in the future. Further, the touch panel may overlay the display panel; the user operates on or near the touch panel according to the graphical user interface shown by the display panel, and upon detecting an operation the touch panel passes it to the processor 140 to determine the user input, after which the processor 140 provides a corresponding visual output on the display panel in response to that input.
In addition, the touch panel and the display panel may be implemented as two independent components or may be integrated.
The processor 140 is a control center of the mobile phone 102, and uses various interfaces and lines to connect various parts of the entire mobile phone, and by running or executing software programs and/or modules stored in the memory 120, and invoking data stored in the memory 120, performs various functions of the mobile phone 102 and processes the data, thereby performing overall monitoring of the mobile phone.
Embodiments of the present disclosure are further described below with reference to the accompanying drawings.
Fig. 4 is a flowchart illustrating a method for controlling a virtual object according to an embodiment of the present disclosure.
The method can be applied to a terminal device (such as the mobile phone 102 shown in fig. 2) capable of presenting a graphical user interface, the graphical user interface is provided through the terminal device, at least part of virtual scenes are displayed in the graphical user interface, the virtual scenes comprise virtual objects and areas for placing the virtual objects, and the virtual objects comprise first virtual objects which are not placed in the areas and second virtual objects which are placed in the areas. As shown in fig. 4, the method includes:
in step S410, in a case where the first action area overlaps the second action area, in response to the fast throw operation for the first virtual object, a throwing direction corresponding to the fast throw operation is determined.
The first action area is the action area currently corresponding to the first virtual object in the placement area, and the second action area is the action area of the second virtual object placed in the area.
In practical applications, the virtual object in the embodiments of the present disclosure may be any object that a user may put in a virtual scene, for example, a building, a plant, furniture, a decoration, and the like.
It should be noted that the fast throw operation (fling) in the embodiments of the present disclosure is a sliding gesture with a sliding direction. It may be any directional sliding operation: for example, a slide in the throwing direction whose touch pressure decreases from large to small until the touch ends, or a slide in the throwing direction whose touch pressure remains unchanged until the touch ends.
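For illustration only (not part of the claimed method), such a directional fling can be classified from raw touch samples; the sample format and the velocity threshold below are assumptions:

```python
import math

def detect_fling(samples, speed_threshold=600.0):
    """Classify a touch trace as a fast throw (fling) gesture.

    samples: list of (t_seconds, x, y) touch points in screen pixels.
    Returns a unit (dx, dy) throwing direction, or None if the release
    speed is below the threshold (i.e., a plain drag, not a fling).
    """
    if len(samples) < 2:
        return None
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    if dt <= 0:
        return None
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = math.hypot(vx, vy)          # release speed in px/s
    if speed < speed_threshold:
        return None
    return (vx / speed, vy / speed)     # normalized throwing direction
```

A fast rightward release yields direction `(1.0, 0.0)`, while a slow drag release yields `None`, so the same release callback can distinguish the two interaction states.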
In an alternative embodiment, the virtual object may be a relatively large three-dimensional model in the virtual scene, or may be a relatively small three-dimensional model in the virtual scene. The virtual object may be of any shape and structure. The content presented by the graphical user interface may comprise all of the virtual object or may be part of the virtual object. The virtual object may be displayed in an upper left, upper right, or other location in the graphical user interface, the exemplary embodiment is not limiting.
For the case where the first action area overlaps the second action area: as shown in fig. 5, the second action area (A1) corresponding to Item1 (the second virtual object) occupies 25 cells and the first action area (A2) corresponding to Item2 (the first virtual object) occupies 25 cells. The user can move Item2 by a drag operation, and in fig. 5 the user has moved Item2 to a position where at least part (5 cells) of its first action area (A2) overlaps the second action area (A1).
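The footprint arithmetic of this example can be reproduced with a minimal grid model; representing an action area as a set of (column, row) cells is an assumption made purely for illustration:

```python
def action_area(origin, size):
    """Set of grid cells occupied by an object's footprint.

    origin: (col, row) of the top-left cell; size: (width, height) in cells.
    """
    cx, cy = origin
    w, h = size
    return {(cx + i, cy + j) for i in range(w) for j in range(h)}

def overlap_cells(area_a, area_b):
    """Cells shared by two action areas (empty set means no overlap)."""
    return area_a & area_b

# Item1 (placed) and Item2 (dragged) each occupy 5x5 = 25 cells; dragging
# Item2 so its left column enters Item1's footprint overlaps exactly 5 cells.
a1 = action_area((0, 0), (5, 5))
a2 = action_area((4, 0), (5, 5))
```

With these footprints, `len(overlap_cells(a1, a2))` is 5, matching the 5-cell overlap shown in fig. 5.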
In step S420, the first virtual object is controlled to be placed in the target area.
Wherein the target area is the area in the throwing direction that is nearest to the second action area.
In practical application, when the first action area and the second action area overlap, whether the user performs a fast throw operation can be recorded. When the user performs one, the fast throw direction F1, i.e., the direction in which the user intends to place the article, is recorded, and the current article is moved to the position adjacent, in that direction, to the action area of the placed article, achieving fast and accurate placement.
It should be noted that the target area may be the position along direction F1 directly adjacent to the placed article's action area; that is, the article is placed at the position immediately adjacent to the placed article in direction F1. This ensures both that the placement direction conforms to the user's operation direction F1 and that the newly placed article does not overlap the placed article.
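One way to compute such a target position can be sketched as follows; the axis-aligned grid, the four-way quantization of the throw direction, and all function names are illustrative assumptions rather than the claimed implementation:

```python
def snap_target(placed_origin, placed_size, moving_size, direction):
    """Origin at which the moving object sits flush against the placed
    object's footprint along the fling direction, without overlapping it.

    Origins are (col, row) top-left cells; sizes are (width, height) in
    cells; direction is the (dx, dy) throw vector, quantized 4-way here.
    """
    dx, dy = direction
    px, py = placed_origin
    pw, ph = placed_size
    mw, mh = moving_size
    if abs(dx) >= abs(dy):                      # horizontal throw dominates
        return (px + pw, py) if dx > 0 else (px - mw, py)
    else:                                       # vertical throw dominates
        return (px, py + ph) if dy > 0 else (px, py - mh)
```

For a 5x5 placed object at the grid origin and a rightward throw, the 5x5 moving object snaps to column 5, i.e., flush against the placed object's right edge.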
By using an interactive gesture, namely a fast throw operation with a throwing direction, a model snap effect that matches the gesture is achieved. When footprints overlap during placement, the user can complete the placement with this simple gesture, improving the efficiency of arranging articles in a custom scene.
According to the embodiments of the present disclosure, when a placed virtual object and an unplaced virtual object overlap, the unplaced object is controlled to be placed in the area nearest to the placed object along the throwing direction of the user's fast throw operation. A model snap effect is thus achieved through a single user operation whenever footprints overlap, and the result matches the direction of the user's operation. Even when a virtual object's size differs from its actual footprint, so that the user cannot accurately identify the footprint, the user can quickly, accurately, and efficiently place adjacent models, improving placement efficiency in the virtual scene.
The above steps are described in detail below.
In some embodiments, the user may be prompted to overlap when the occurrence of the overlap is detected, and the user may be timely reminded of the occurrence of the overlap. As an example, before step S410, the method may further include the steps of:
Step a), judging whether the first action area and the second action area overlap; if an overlap occurs, performing step b).
Step b), displaying, in the graphical user interface, prompt information indicating that the first action area and the second action area overlap.
For example, once the overlap occurs, the interaction enters a new state S2 from the normal drag state S1. Through the prompt feedback, the user learns that an overlap has occurred and can then perform the gesture operation, namely the fast throw operation with a throwing direction, in state S2.
For example, as shown in fig. 5, 5 cells overlap between the first action area (A2) corresponding to the first virtual object and the second action area (A1) corresponding to the second virtual object, and the overlap is presented by highlighting the 5 overlapping cells. By prompting the user whenever an overlap is detected, the user can be reminded of the overlap in time.
Based on the steps a) and b), when an article is dragged, whether the current dragging position of the article overlaps with the existing article can be detected in real time, so that the overlapping condition can be detected in time. As an example, the above step a) may include the steps of:
Step c), in response to a drag operation for the first virtual object, controlling the first virtual object to move with the drag operation and determining the first action area to which the first virtual object has currently moved.
Step d), judging whether the second action area overlaps the first action area to which the first virtual object has currently moved.
For example, as shown in fig. 6, Item1 (the second virtual object), already placed in the area, corresponds to the second action area (A1), and Item2 (the first virtual object), not yet placed, corresponds to the first action area (A2). The user can make Item2 follow a drag operation by dragging at an arbitrary position on Item2.
As an alternative embodiment, when the user starts a drag operation, the normal drag state S1 is entered. In state S1 it is detected in real time whether the action area (A2) of the currently dragged object overlaps the action area (A1) of a placed object, and whether to enter state S2 is judged according to whether an overlap occurs. If no overlap occurs, the user can simply continue the drag operation and place the object by accurate dragging.
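The S1/S2 transition can be sketched as a small state machine; the class and member names here are invented for illustration and are not part of the disclosure:

```python
class DragState:
    """Tracks the S1 (normal drag) / S2 (overlap) states during a drag."""
    S1_NORMAL = "S1"
    S2_OVERLAP = "S2"

    def __init__(self, placed_area):
        self.placed_area = placed_area      # cells of already-placed objects
        self.state = self.S1_NORMAL

    def on_drag(self, moving_area):
        """Update state from the dragged object's current footprint cells."""
        if moving_area & self.placed_area:
            self.state = self.S2_OVERLAP    # overlap: fling becomes available
        else:
            self.state = self.S1_NORMAL     # no overlap: normal placement
        return self.state
```

Each drag-move event would call `on_drag` with the object's current footprint, so the UI can show the overlap prompt exactly while in S2 and hide it again when the drag leaves the overlap.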
The blocking condition is detected and judged during the placement operation; that is, while an article is being dragged, whether its current drag position overlaps an existing article can be detected in real time, so that overlaps are detected promptly.
Based on step a) and step b), when no overlap is detected, a normal (non-directional) placement operation can be performed directly, saving the user's operation cost. As an example, after step a), the method may further comprise the following step:
Step e), if no overlap occurs between the first action area and the second action area, in response to a placement operation for the first virtual object, controlling the first virtual object to be placed in the first action area corresponding to the moment of the placement operation.
For example, in fig. 6, no overlap occurs between the first action area (A2) of the first virtual object and the second action area (A1) of the second virtual object; if the user performs the placement operation at this moment, the first virtual object is placed at its position in the current first action area (A2).
As an alternative embodiment, when the user starts the drag operation, the normal drag state S1 is entered; as long as the action area of the currently dragged object does not overlap the action area of any placed object while in state S1, the drag proceeds normally, that is, the user can complete the placement by precise dragging.
In the embodiment of the disclosure, when no overlap is detected, a non-directional placement operation can be performed directly, saving the user operation cost.
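The S1 drag state and the non-directional placement it permits might look like the following sketch. The function names and the `(x, y, w, h)` rectangle convention are illustrative assumptions:

```python
# Rects are (x, y, w, h) in grid cells.
def rects_overlap(a, b):
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def on_drag_move(item_wh, grid_pos, placed):
    """Called on every drag update: returns the current drag state."""
    cand = (grid_pos[0], grid_pos[1], item_wh[0], item_wh[1])
    return "S2" if any(rects_overlap(cand, p) for p in placed) else "S1"

def on_place(state, item_wh, grid_pos, placed):
    """Non-directional placement: only allowed while in state S1."""
    if state != "S1":
        return False
    placed.append((grid_pos[0], grid_pos[1], item_wh[0], item_wh[1]))
    return True
```

When `on_drag_move` reports S2, the release either triggers the quick-throw handling described below or the user keeps dragging until the overlap clears.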
Of course, when the first virtual object and the second virtual object overlap, instead of placing the first virtual object by the quick-throw operation, the user may also continue the operation of step c) above to change the placement position of the first virtual object so as to avoid the overlap. As an example, after step d), the method further includes:
if an overlap occurs, continuing to respond to the drag operation on the first virtual object and controlling the first virtual object to move with the drag operation;
and, when no overlap occurs between the second action area and the first action area, in response to a placement operation on the first virtual object, controlling the first virtual object to be placed in the first action area corresponding to the moment of the placement operation.
In the embodiment of the present disclosure, the user may place the first virtual object through the quick-throw operation so as to avoid the first virtual object overlapping the second virtual object, or may continue to change the placement position of the first virtual object by continuing the drag operation of step c) above. Placing the first virtual object through the quick-throw operation, however, both avoids the overlap and is more convenient for the user: a single quick-throw operation places the first virtual object at a suitable position, sparing the user the tedious adjustment of the placement position through drag operations in various directions. The quick-throw operation provided by the embodiment of the disclosure therefore makes placing the first virtual object at a reasonable position more convenient and quick.
In some embodiments, the throwing direction of the quick-throw operation can be expressed by a touch pressure that changes from large to small, which makes it convenient for the user to produce a directional throwing effect. As one example, the quick-throw operation includes an operation in which the touch pressure decreases from large to small along the throwing direction until the touch ends.
In practical application, a directional interactive gesture (a quick-throw operation carrying a throwing direction) can be realized through a touch pressure that decreases from large to small: for example, the direction along which the touch pressure decreases is taken as the throwing direction, so that the throwing direction of the quick-throw operation can be determined accurately and the user can conveniently produce a directional throwing effect. In addition, a quick-throw operation whose touch pressure decreases until the touch ends can be distinguished from a drag operation, in which the touch pressure remains unchanged, so that the two operations on a virtual object are not confused.
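One plausible way to recognise such a pressure-based gesture is sketched below: a throw is reported only when the touch pressure falls monotonically from its peak to below a threshold before the touch ends, and the throwing direction is the finger's displacement over that falling phase. The sample format and the `drop_ratio` threshold are assumptions for illustration, not values from the patent:

```python
import math

def detect_quick_throw(samples, drop_ratio=0.5):
    """samples: list of (x, y, pressure) touch points, oldest first.
    Returns a unit direction vector for a quick throw, or None when
    the gesture should be treated as an ordinary drag."""
    if len(samples) < 2:
        return None
    pressures = [p for _, _, p in samples]
    peak = max(pressures)
    peak_i = pressures.index(peak)
    tail = pressures[peak_i:]
    # pressure must not rise again after its peak
    if any(tail[i] < tail[i + 1] for i in range(len(tail) - 1)):
        return None
    if tail[-1] > drop_ratio * peak:
        return None  # did not fall far enough: ordinary drag
    x0, y0, _ = samples[peak_i]
    x1, y1, _ = samples[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None  # no displacement, so no throwing direction
    return (dx / norm, dy / norm)
```

A constant-pressure drag fails the `drop_ratio` test and returns `None`, which is how the sketch keeps the two gestures from being confused.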
In some embodiments, the process from the quick-throw operation to the completion of the article placement may be expressed by the movement of the article, so as to present a realistic effect of the article being thrown. As an example, the step S420 may include the following step:
Step f): controlling the first virtual object to move from the first action area to the target area and to be placed in the target area.
In the embodiment of the disclosure, the process from the quick-throw operation to the completion of the article placement is expressed by the movement of the article, presenting a realistic effect of the article being thrown.
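The visible movement from the drag position to the snapped target can be as simple as linear interpolation over a few frames. This is an illustrative sketch; the frame count and coordinate convention are assumed:

```python
def move_steps(start, target, frames=10):
    """Yield interpolated (x, y) positions so the item visibly
    travels from its dragged position to the snapped target area."""
    sx, sy = start
    tx, ty = target
    for i in range(1, frames + 1):
        t = i / frames
        yield (sx + (tx - sx) * t, sy + (ty - sy) * t)
```

An easing curve could replace the linear `t` to make the throw feel more physical; the final yielded position always equals the target.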
In some embodiments, if the target area determined by the above article-placement manner still overlaps another placed article, the throw may simply fail, or the target area may be adjusted, that is, the article may be placed adjacent to the second overlapped article along the throwing direction.
As one example, the virtual object further includes a third virtual object that has been placed in the area; after step S410, the method may further include the following step:
Step g): if the target area overlaps the third action area, displaying a placement-failure prompt in the graphical user interface.
Here the third action area is the action area in which the third virtual object is placed. For example, as shown in fig. 7, article A (the second virtual object) and article B (the third virtual object) have already been placed in the area, article C (the first virtual object) has not, and the article placement space is a 5×5 grid, in which article A occupies 3×3 cells and article B occupies 1×3 cells. If article C, which occupies 2×3 cells, is dragged so that it overlaps article A and is then thrown downward, article C should, according to this method, be placed in the area immediately below article A. At this point, however, the available area between article A and article B is insufficient to hold article C, so the placement fails and failure information is fed back through a prompt.
In practical application, if article A is dragged onto article B and then quick-thrown toward the upper side of article B, but another article C is already present above article B and the gap between article B and article C cannot hold article A, the placement operation cannot be completed by the above scheme alone. A corresponding feedback mechanism is therefore provided, namely displaying a placement-failure prompt.
In the embodiment of the disclosure, if the target area determined by the article-placement manner of steps S410 to S420 still overlaps another article, the throw may simply fail and a placement failure be prompted, reminding the user to perform the quick-throw operation again.
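The fail-fast variant of the snap can be sketched for the downward throw of fig. 7 as follows. The cell coordinates, the `'down'`-only direction handling and the layout used below are assumptions for the sketch:

```python
def snap_or_fail(item_wh, overlapped, others, direction, grid=(5, 5)):
    """Place an item of size (w, h) flush against `overlapped`
    (x, y, w, h) along `direction`; return None (placement fails)
    if the slot leaves the grid or collides with another footprint."""
    iw, ih = item_wh
    ox, oy, ow, oh = overlapped
    if direction == "down":
        pos = (ox, oy + oh)  # row just below the overlapped item
    else:
        raise NotImplementedError("sketch handles downward throws only")
    x, y = pos
    gw, gh = grid
    if x + iw > gw or y + ih > gh:
        return None  # off the grid: placement fails
    cand = (x, y, iw, ih)
    def hit(a, b):
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])
    if any(hit(cand, o) for o in others):
        return None  # blocked by a third item: placement fails
    return pos
```

A `None` result is where the UI would show the placement-failure prompt described in step g).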
As another example, the virtual object further includes a fourth virtual object that has been placed in the area; after step S410, the method may further include the following step:
Step h): if the target area overlaps the fourth action area, controlling the first virtual object to be placed in the area closest to the fourth action area along the throwing direction.
Here the fourth action area is the action area in which the fourth virtual object is placed. Illustratively, article A (the second virtual object) and article D (the fourth virtual object) have already been placed in the area, article C (the first virtual object) has not, article A occupies 3×3 cells, article D occupies 1×3 cells, and the article placement space is a 5×5 grid. If article C, which occupies 2×3 cells, is dragged so that it overlaps article A and is then thrown downward, article C should, according to this method, be placed in the area immediately below article A. At this point, however, the available area between article A and article D is insufficient to hold article C, so the position is adjusted: article C continues to advance along the throwing direction to the position adjacent to article D, and that position is determined as its placement position.
In the embodiment of the disclosure, if the target area determined by the article-placement manner of steps S410 to S420 still overlaps another article, the position may instead be adjusted, that is, the article continues to move along the throwing direction to the position adjacent to the second overlapped article, and that position is determined as the final placement position. This spares the user from repeating the operation and reduces the user operation cost.
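The adjusting variant, by contrast, keeps advancing along the throwing direction past each blocking footprint until a free slot is found or the grid runs out. A downward-only sketch under the same assumed cell convention:

```python
def snap_with_slide(item_wh, first_hit, others, grid=(5, 5)):
    """Throwing downward: start flush under `first_hit` (x, y, w, h);
    whenever the candidate slot still overlaps a placed footprint,
    advance to just below that footprint and retry, until the slot
    is free or the grid runs out (None)."""
    iw, ih = item_wh
    def hit(a, b):
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])
    x = first_hit[0]
    y = first_hit[1] + first_hit[3]
    gw, gh = grid
    while y + ih <= gh:
        cand = (x, y, iw, ih)
        blockers = [o for o in others if hit(cand, o)]
        if not blockers:
            return (x, y)
        y = max(o[1] + o[3] for o in blockers)  # slide past the blocker
    return None
```

Returning `None` here would fall back to the failure prompt of step g), so the two variants compose naturally.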
Fig. 8 provides a schematic structural diagram of a control device for a virtual object. The device may be applied to a terminal device. A graphical user interface is provided through the terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene includes virtual objects and an area for placing the virtual objects, and the virtual objects include a first virtual object that has not been placed in the area and a second virtual object that has been placed in the area. As shown in fig. 8, the control device 800 of the virtual object includes:
a determining module 801, configured to, in the case where a first action area overlaps a second action area, determine, in response to a quick-throw operation for the first virtual object, a throwing direction corresponding to the quick-throw operation; the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
a control module 802, configured to control the first virtual object to be placed in a target area, where the target area is the area that lies in the throwing direction and is closest to the second action area.
By the above means, when a placed virtual object and an unplaced virtual object overlap, the unplaced virtual object can be controlled to be placed in the area closest to the placed virtual object along the throwing direction of the user's quick-throw operation. A model-adsorption (snap) effect is thus achieved through a single user operation when object footprints overlap, and the effect follows the direction of the user's operation. Even if the displayed size of a virtual object differs from its actual footprint, so that the user cannot accurately identify that footprint, the user can still complete the adjacent placement quickly, accurately and efficiently, which improves the efficiency of placing models in the virtual scene.
In one possible embodiment, the apparatus further comprises:
the judging module is used for judging whether the first action area and the second action area are overlapped or not;
and the first display module is used for displaying the prompt information overlapped between the first action area and the second action area in the graphical user interface if the first action area and the second action area are overlapped.
In one possible embodiment, the judging module is specifically configured to:
controlling the first virtual object to move along with the drag operation in response to the drag operation for the first virtual object, and determining the first action area to which the first virtual object moves currently;
and judging whether the second action area and the first action area to which the first virtual object is currently moved overlap or not.
In one possible embodiment, the control module is further configured to:
and if no overlapping occurs, controlling the first virtual object to be placed in the first action area corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, the control module is further configured to: if overlapping occurs, continuing to respond to the drag operation aiming at the first virtual object, and controlling the first virtual object to move along with the drag operation; and in the case that no overlap occurs between the second action region and the first action region, controlling the first virtual object to be placed in the first action region corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, the quick-throw operation includes an operation in which the touch pressure decreases from large to small along the throwing direction until the touch ends.
In one possible embodiment, the control module is specifically configured to:
the first virtual object is controlled to move from the first action area to the target area and be placed in the target area.
In one possible implementation, the virtual object further comprises a third virtual object that has been placed in the region; the apparatus further comprises:
the second display module is used for displaying prompt information of placement failure in the graphical user interface if the target area is overlapped with the third action area; wherein the third active area is an active area in which the third virtual object is placed.
In one possible implementation, the virtual object further comprises a fourth virtual object that has been placed in the region;
the control module is also used for: if the target area overlaps with a fourth action area, controlling the first virtual object to be placed in an area that is closest to the fourth action area in the throwing direction; wherein the fourth active area is an active area in which the fourth virtual object is placed.
The control device for the virtual object provided by the embodiment of the present disclosure has the same technical characteristics as the control method for the virtual object provided by the foregoing embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
Fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure, including: a processor 901, a storage medium 902 and a bus 903. The storage medium 902 stores machine-readable instructions executable by the processor 901. When the electronic device runs the control method of a virtual object as in the embodiments, the processor 901 and the storage medium 902 communicate via the bus 903, and the processor 901 executes the machine-readable instructions to perform the following steps:
in the case that the first action area and the second action area overlap, determining, in response to a quick-throw operation for the first virtual object, a throwing direction corresponding to the quick-throw operation; the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
Controlling the first virtual object to be placed in a target area; wherein the target area is an area in the throwing direction and closest to the second action area.
By the above means, when a placed virtual object and an unplaced virtual object overlap, the unplaced virtual object can be controlled to be placed in the area closest to the placed virtual object along the throwing direction of the user's quick-throw operation. A model-adsorption (snap) effect is thus achieved through a single user operation when object footprints overlap, and the effect follows the direction of the user's operation. Even if the displayed size of a virtual object differs from its actual footprint, so that the user cannot accurately identify that footprint, the user can still complete the adjacent placement quickly, accurately and efficiently, which improves the efficiency of placing models in the virtual scene.
In one possible embodiment, in the case where the first action region overlaps the second action region, before determining a throwing direction corresponding to the fast throw operation for the first virtual object, the processor is further configured to:
judging whether the first action area and the second action area are overlapped or not;
And if the first action region and the second action region are overlapped, displaying the overlapped prompt information between the first action region and the second action region in the graphical user interface.
In a possible embodiment, the processor 901 is specifically configured to, when executing the determination of whether the first action region and the second action region overlap with each other:
controlling the first virtual object to move along with the drag operation in response to the drag operation for the first virtual object, and determining the first action area to which the first virtual object moves currently;
and judging whether the second action area and the first action area to which the first virtual object is currently moved overlap or not.
In a possible embodiment, after determining whether an overlap between the first region of action and the second region of action has occurred, the processor is further configured to:
and if no overlapping occurs, controlling the first virtual object to be placed in the first action area corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, after determining whether an overlap occurs between the second region of action and the first region of action to which the first virtual object is currently moving, the processor is further configured to:
If overlapping occurs, continuing to respond to the drag operation aiming at the first virtual object, and controlling the first virtual object to move along with the drag operation;
and in the case that no overlap occurs between the second action region and the first action region, controlling the first virtual object to be placed in the first action region corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, the quick-throw operation includes an operation in which the touch pressure decreases from large to small along the throwing direction until the touch ends.
In a possible embodiment, the processor 901 is specifically configured to, when executing control of placing the first virtual object in a target area:
the first virtual object is controlled to move from the first action area to the target area and be placed in the target area.
In one possible embodiment, the virtual object further comprises a third virtual object that has been placed in the region; in the case where the first action region overlaps the second action region, after determining a throwing direction corresponding to a fast throw operation for the first virtual object in response to the fast throw operation, the processor is further configured to:
If the target area is overlapped with the third action area, displaying prompt information of placement failure in the graphical user interface; wherein the third active area is an active area in which the third virtual object is placed.
In one possible embodiment, the virtual object further comprises a fourth virtual object that has been placed in the region; in the case where the first action region overlaps the second action region, after determining a throwing direction corresponding to a fast throw operation for the first virtual object in response to the fast throw operation, the processor is further configured to:
if the target area overlaps with a fourth action area, controlling the first virtual object to be placed in an area that is closest to the fourth action area in the throwing direction; wherein the fourth active area is an active area in which the fourth virtual object is placed.
In practical applications, the storage medium 902 may include a high-speed random access memory (Random Access Memory, RAM for short), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. Communication connection between the system network element and at least one other network element is achieved through at least one communication interface 904 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, etc. may be used.
The bus 903 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 9, but this does not mean that there is only one bus or one type of bus.
The storage medium 902 is configured to store a program, and the processor 901 executes the program after receiving an execution instruction; the method executed by the apparatus defined by the flow disclosed in any embodiment of the disclosure may be applied to the processor 901 or implemented by the processor 901.
The processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the methods described above may be completed by integrated logic circuitry in hardware or by instructions in the form of software in the processor 901. The processor 901 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, and may implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The processor 901 reads the information in the storage medium 902 and, in combination with its hardware, completes the steps of the above method.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
in the case that the first action area and the second action area overlap, determining, in response to a quick-throw operation for the first virtual object, a throwing direction corresponding to the quick-throw operation; the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
controlling the first virtual object to be placed in a target area; wherein the target area is an area in the throwing direction and closest to the second action area.
By the above means, when a placed virtual object and an unplaced virtual object overlap, the unplaced virtual object can be controlled to be placed in the area closest to the placed virtual object along the throwing direction of the user's quick-throw operation. A model-adsorption (snap) effect is thus achieved through a single user operation when object footprints overlap, and the effect follows the direction of the user's operation. Even if the displayed size of a virtual object differs from its actual footprint, so that the user cannot accurately identify that footprint, the user can still complete the adjacent placement quickly, accurately and efficiently, which improves the efficiency of placing models in the virtual scene.
In one possible embodiment, in the case where the first action region overlaps the second action region, before determining a throwing direction corresponding to the fast throw operation for the first virtual object, the processor is further configured to:
judging whether the first action area and the second action area are overlapped or not;
and if the first action region and the second action region are overlapped, displaying the overlapped prompt information between the first action region and the second action region in the graphical user interface.
In a possible embodiment, the processor is specifically configured to, when performing the determination of whether an overlap occurs between the first region of action and the second region of action:
controlling the first virtual object to move along with the drag operation in response to the drag operation for the first virtual object, and determining the first action area to which the first virtual object moves currently;
and judging whether the second action area and the first action area to which the first virtual object is currently moved overlap or not.
In a possible embodiment, after determining whether an overlap between the first region of action and the second region of action has occurred, the processor is further configured to:
And if no overlapping occurs, controlling the first virtual object to be placed in the first action area corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, after determining whether an overlap occurs between the second region of action and the first region of action to which the first virtual object is currently moving, the processor is further configured to:
if overlapping occurs, continuing to respond to the drag operation aiming at the first virtual object, and controlling the first virtual object to move along with the drag operation;
and in the case that no overlap occurs between the second action region and the first action region, controlling the first virtual object to be placed in the first action region corresponding to the time of the placement operation in response to the placement operation for the first virtual object.
In one possible embodiment, the quick-throw operation includes an operation in which the touch pressure decreases from large to small along the throwing direction until the touch ends.
In a possible embodiment, the processor, when executing control of the first virtual object being placed in the target area, is specifically configured to:
The first virtual object is controlled to move from the first action area to the target area and be placed in the target area.
In one possible embodiment, the virtual object further comprises a third virtual object that has been placed in the region; in the case where the first action region overlaps the second action region, after determining a throwing direction corresponding to a fast throw operation for the first virtual object in response to the fast throw operation, the processor is further configured to:
if the target area is overlapped with the third action area, displaying prompt information of placement failure in the graphical user interface; wherein the third active area is an active area in which the third virtual object is placed.
In one possible embodiment, the virtual object further comprises a fourth virtual object that has been placed in the region; in the case where the first action region overlaps the second action region, after determining a throwing direction corresponding to a fast throw operation for the first virtual object in response to the fast throw operation, the processor is further configured to:
if the target area overlaps with a fourth action area, controlling the first virtual object to be placed in an area that is closest to the fourth action area in the throwing direction; wherein the fourth active area is an active area in which the fourth virtual object is placed.
In the embodiments of the present disclosure, the computer program, when executed by a processor, may also execute other machine-readable instructions to perform the methods described in other embodiments; for the specific method steps and principles, refer to the descriptions of those embodiments, which are not repeated here.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
As another example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the control method of the virtual object according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Furthermore, the terms "first," "second," "third," and so on are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, provided to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that, within the technical scope disclosed herein, the technical solutions described in the foregoing embodiments may still be modified, changes may readily be conceived, or some of the technical features may be replaced with equivalents. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure, and are intended to be covered by the scope of protection of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A control method of a virtual object, wherein a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene comprises virtual objects and an area for placing the virtual objects, and the virtual objects comprise a first virtual object that has not been placed in the area and a second virtual object that has been placed in the area; the method comprises:
in a case where a first action area and a second action area overlap, in response to a quick throw operation for the first virtual object, determining a throwing direction corresponding to the quick throw operation; wherein the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
controlling the first virtual object to be placed in a target area; wherein the target area is an area that is in the throwing direction and closest to the second action area.
2. The method of claim 1, wherein, in the case where the first action area and the second action area overlap, before determining, in response to the quick throw operation for the first virtual object, the throwing direction corresponding to the quick throw operation, the method further comprises:
determining whether the first action area and the second action area overlap;
if the first action area and the second action area overlap, displaying, in the graphical user interface, prompt information indicating that the first action area and the second action area overlap.
3. The method of claim 2, wherein the determining whether the first action area and the second action area overlap comprises:
in response to a drag operation for the first virtual object, controlling the first virtual object to move along with the drag operation, and determining the first action area to which the first virtual object has currently moved;
determining whether the second action area overlaps the first action area to which the first virtual object has currently moved.
4. The method of claim 2, further comprising, after the determining whether the first action area and the second action area overlap:
if no overlap occurs, in response to a placement operation for the first virtual object, controlling the first virtual object to be placed in the first action area corresponding to the time of the placement operation.
5. The method of claim 3, further comprising, after the determining whether the second action area overlaps the first action area to which the first virtual object has currently moved:
if overlap occurs, continuing to respond to the drag operation for the first virtual object and controlling the first virtual object to move along with the drag operation;
in a case where the second action area and the first action area no longer overlap, in response to a placement operation for the first virtual object, controlling the first virtual object to be placed in the first action area corresponding to the time of the placement operation.
6. The method of claim 1, wherein the quick throw operation comprises a touch operation in the throwing direction in which the touch pressure decreases from large to small until the touch ends.
7. The method of claim 1, wherein the controlling the first virtual object to be placed in the target area comprises:
controlling the first virtual object to move from the first action area to the target area and to be placed in the target area.
8. The method of claim 1, wherein the virtual objects further comprise a third virtual object that has been placed in the area;
in the case where the first action area and the second action area overlap, after determining, in response to the quick throw operation for the first virtual object, the throwing direction corresponding to the quick throw operation, the method further comprises:
if the target area overlaps a third action area, displaying prompt information of placement failure in the graphical user interface; wherein the third action area is the action area of the third virtual object placed in the area.
9. The method of claim 1, wherein the virtual objects further comprise a fourth virtual object that has been placed in the area;
in the case where the first action area and the second action area overlap, after determining, in response to the quick throw operation for the first virtual object, the throwing direction corresponding to the quick throw operation, the method further comprises:
if the target area overlaps a fourth action area, controlling the first virtual object to be placed in an area that is in the throwing direction and closest to the fourth action area; wherein the fourth action area is the action area of the fourth virtual object placed in the area.
10. A control device for a virtual object, wherein a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene comprises virtual objects and an area for placing the virtual objects, and the virtual objects comprise a first virtual object that has not been placed in the area and a second virtual object that has been placed in the area; the device comprises:
a determining module, configured to, in a case where a first action area and a second action area overlap, determine, in response to a quick throw operation for the first virtual object, a throwing direction corresponding to the quick throw operation; wherein the first action area is the action area currently corresponding to the first virtual object in the area, and the second action area is the action area of the second virtual object placed in the area;
a control module, configured to control the first virtual object to be placed in a target area; wherein the target area is an area that is in the throwing direction and closest to the second action area.
11. An electronic terminal comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 9.
12. A computer-readable storage medium storing computer-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of claims 1 to 9.
CN202310004780.6A 2023-01-03 2023-01-03 Virtual object control method and device and electronic terminal Pending CN116099198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310004780.6A CN116099198A (en) 2023-01-03 2023-01-03 Virtual object control method and device and electronic terminal


Publications (1)

Publication Number Publication Date
CN116099198A true CN116099198A (en) 2023-05-12

Family

ID=86257520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310004780.6A Pending CN116099198A (en) 2023-01-03 2023-01-03 Virtual object control method and device and electronic terminal

Country Status (1)

Country Link
CN (1) CN116099198A (en)

Similar Documents

Publication Publication Date Title
US10496267B2 (en) Operation method and terminal device
CN110955370B (en) Switching method and device of skill control in game and touch terminal
CN107678644B (en) Image processing method and mobile terminal
US20180150215A1 (en) Method for implementing dialing keypad of touch screen and smart watch
CN107066188B (en) A kind of method and terminal sending screenshot picture
WO2020215959A1 (en) Game object control method and apparatus
CN107479818B (en) Information interaction method and mobile terminal
CN104049866A (en) Mobile terminal and method and device for achieving screen splitting of mobile terminal
WO2015106510A1 (en) Screen splitting method and device for applications, intelligent terminal and storage medium
CN108366169B (en) Notification message processing method and mobile terminal
WO2020015462A1 (en) Timing transmission method, electronic device and storage medium
CN107562262B (en) Method for responding touch operation, terminal and computer readable storage medium
WO2016173307A1 (en) Message copying method and device, and smart terminal
CN110347327B (en) Item editing method and touch terminal
CN116594616A (en) Component configuration method and device and computer readable storage medium
CN116099198A (en) Virtual object control method and device and electronic terminal
CN107526496B (en) Interface display method and device and mobile terminal
CN111729296B (en) Game interface interaction method and device and electronic terminal
CN107168600B (en) Push message checking method and device
CN105404439B (en) Folder creating method and device
CN112764862A (en) Application program control method and device and electronic equipment
CN113926186A (en) Method and device for selecting virtual object in game and touch terminal
CN113485593A (en) Display control method, display control device, electronic device, and medium
CN108536540B (en) Method and device for acquiring mouse message of desktop icon
CN116850577A (en) Method and device for processing layers in game and electronic terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination