CN113577766A - Object processing method and device - Google Patents

Object processing method and device

Info

Publication number
CN113577766A
Authority
CN
China
Prior art keywords
image
target object
game
user interface
graphical user
Prior art date
Legal status
Granted
Application number
CN202110898692.6A
Other languages
Chinese (zh)
Other versions
CN113577766B (en)
Inventor
王斐
王珂欣
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN202110898692.6A priority Critical patent/CN113577766B/en
Publication of CN113577766A publication Critical patent/CN113577766A/en
Application granted granted Critical
Publication of CN113577766B publication Critical patent/CN113577766B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/837 - Shooting of targets
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 - Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides an object processing method and device, and relates to the field of augmented reality in computer technology. The specific implementation scheme is as follows: in response to a first touch operation acting on a first operation control, a first image captured by the camera device at the current moment is acquired. If the first image includes at least one object, the object position of the at least one object in the first image is acquired. If it is determined according to the object position of the at least one object that a target object exists among the at least one object, a first group corresponding to the target object is acquired, where the target object is located at a position corresponding to a preset area in the graphical user interface. A preset operation is performed according to the first group corresponding to the target object and a second group corresponding to the control object, where the control object is an object associated with the first terminal device. By capturing images of the real scene, a simulated shooting game can be realized effectively on the basis of the real scene without relying on additional equipment, which effectively improves the flexibility of the game.

Description

Object processing method and device
Technical Field
The present disclosure relates to the field of augmented reality in computer technologies, and in particular, to an object processing method and apparatus.
Background
With the continuous development of mobile communication technology, more and more mobile games, such as shooting games, are emerging.
Existing two-dimensional shooting games generally cannot meet users' needs, so live-action simulated shooting games have appeared. In the prior art, implementing a live-action simulated shooting game generally requires the user to wear dedicated shooting equipment, which may, for example, emit a laser to simulate a shooting operation and also sense incoming shots, thereby simulating a hit.
However, implementing a live-action simulated shooting game by wearing shooting equipment often places high demands on the venue and the equipment, which results in low flexibility of the game.
Disclosure of Invention
The disclosure provides an object processing method and device.
According to a first aspect of the present disclosure, there is provided an object processing method applied to a first terminal device including a camera device and a screen, where the camera device is configured to capture images, the screen is configured to display a graphical user interface, and the graphical user interface includes the image captured by the camera device and a first operation control, the method including:
in response to a first touch operation acting on the first operation control, acquiring a first image captured by the camera device at the current moment;
if the first image includes at least one object, acquiring an object position of the at least one object in the first image;
if it is determined, according to the object position of the at least one object, that a target object exists among the at least one object, acquiring a first group corresponding to the target object, where the target object is located at a position corresponding to a preset area in the graphical user interface; and
performing a preset operation according to the first group corresponding to the target object and a second group corresponding to a control object, where the control object is an object associated with the first terminal device.
According to a second aspect of the present disclosure, there is provided an object processing apparatus applied to a first terminal device including a camera device and a screen, the camera device being configured to capture images, the screen being configured to display a graphical user interface including the image captured by the camera device and a first operation control, the object processing apparatus including:
a first acquisition module, configured to acquire, in response to a first touch operation acting on the first operation control, a first image captured by the camera device at the current moment;
a second acquisition module, configured to acquire an object position of at least one object in the first image if the first image includes the at least one object;
a third acquisition module, configured to acquire a first group corresponding to a target object if it is determined, according to the object position of the at least one object, that the target object exists among the at least one object, where the target object is located at a position corresponding to a preset area in the graphical user interface; and
a processing module, configured to perform a preset operation according to the first group corresponding to the target object and a second group corresponding to a control object, where the control object is an object associated with the first terminal device.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product, including a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the electronic device to perform the method of the first aspect.
Techniques according to the present disclosure improve the flexibility of game operations.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic view of an application scenario provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of an object processing method provided by an embodiment of the present disclosure;
fig. 3 is a second flowchart of an object processing method provided in the embodiment of the present disclosure;
fig. 4 is a schematic diagram of an implementation of an object segmentation process provided in the embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating the case where an object exists at the position corresponding to the preset area according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating the case where no object exists at the position corresponding to the preset area according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating determination of the partial image of the target object according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating storing object information in association according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating an implementation of determining an object identifier of a target object according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram illustrating an implementation of a supplemental control provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating an implementation of a game state provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an implementation of game information synchronization provided by an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an object processing apparatus according to an embodiment of the present disclosure;
fig. 14 is a block diagram of an electronic device for implementing an object processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to better understand the technical solution of the present disclosure, the related art related to the present disclosure is further described in detail below.
With the continuous development of mobile communication technology, a variety of mobile games have emerged, one category of which is shooting games. When a shooting game is implemented on a terminal device in the prior art, at least one interactive virtual object is usually displayed in a graphical user interface; the user controls a target virtual object, which is then made to perform preset shooting operations on the interactive virtual objects, thereby implementing a two-dimensional shooting game. However, implementing only a two-dimensional shooting game on a terminal device results in a lack of realism.
Therefore, current two-dimensional shooting games generally cannot meet users' needs, and live-action simulated shooting games have appeared. In the prior art, such a game requires the user to wear dedicated shooting equipment, which may, for example, emit a laser to simulate a shooting operation and also sense incoming shots, thereby simulating a hit.
However, an implementation that realizes a live-action simulated shooting game through wearable shooting equipment generally places high demands on venue and equipment, and the game cannot be played anytime and anywhere, so the flexibility of the game is low.
In view of the problems in the prior art, the present disclosure proposes the following technical concept: the camera device of the terminal device captures the real scene, object recognition is performed on the captured scene, and preset operations are performed on the recognized objects. A shooting game based on Augmented Reality (AR) can thus be realized; a live-action simulated shooting game can be implemented effectively on the terminal device itself, without relying on additional equipment, which can effectively improve the flexibility of game operation.
First, an application scenario of the present disclosure is described with reference to fig. 1, where fig. 1 is a schematic view of the application scenario provided in the embodiment of the present disclosure.
As shown in fig. 1, the object processing method provided by the embodiment of the present disclosure may be applied to a terminal device 101, where the terminal device 101 may include a camera device and a screen 102.
The camera device in this embodiment is used to capture images of a real scene, and the screen 102 is used to display a graphical user interface. A graphical user interface is a computer user interface displayed graphically; it allows the user to manipulate icons or menu controls on the screen with an input device, which may be, for example, a mouse or a touch screen.
In a possible implementation, the graphical user interface may include the image captured by the camera device. As shown in fig. 1, the camera device of the current terminal device may capture the real scene, and the captured image is then displayed in the graphical user interface. It can be understood that, during the game, the camera device captures images continuously, ensuring that the graphical user interface on the screen of the terminal device always displays the image of the real scene throughout the game.
The graphical user interface in this embodiment may further include a first operation control. In a possible implementation, the game in this embodiment may be, for example, a shooting game, and the corresponding first operation control may be, for example, the shooting control 103 shown in fig. 1. The first operation control responds to a user operation and triggers a corresponding preset operation, for example performing a virtual shooting action.
In a possible implementation, the first operation control may be displayed overlaid on top of the image captured by the camera device, as shown in fig. 1. Alternatively, in an actual implementation, the image captured by the camera device may be displayed on one side of the graphical user interface and the first operation control on the other side.
The first terminal device introduced in the present disclosure may be, for example, a mobile phone (or "cellular" phone), a tablet computer, a computer device, or a portable, pocket-sized, handheld, or computer-embedded mobile device, without particular limitation here. The specific implementation of the first terminal device may be selected according to actual needs; any device that includes a camera device and a screen and can be used to execute the object processing method of the present disclosure may serve as the first terminal device in this embodiment.
Based on the above introduction, the following describes in detail the object processing method provided by the embodiment of the present disclosure with reference to fig. 2, and fig. 2 is a flowchart of the object processing method provided by the embodiment of the present disclosure.
As shown in fig. 2, the method includes:
s201, responding to a first touch operation acted on a first operation control, and acquiring a first image shot by the camera at the current moment.
In this embodiment, a user may perform a first touch operation on a first operation control, where the first operation control is described in the above embodiments, and details are not described here. In addition, the first touch operation in this embodiment may be, for example, a click operation, a long-press operation, a sliding operation, and the like, and the specific implementation manner of the first touch operation is not particularly limited in this embodiment, and all the operations for triggering the function corresponding to the first operation control may be used as the first touch operation in this embodiment.
In this embodiment, the first operation control is used to trigger a corresponding virtual shooting operation, and since the virtual shooting operation in this embodiment is implemented based on a real scene of shooting, a first touch operation that acts on the first operation control can be performed to acquire a first image captured by the image capture device at the current time.
It can be understood that during the game, the camera device will continuously take images and display them on the graphical user interface of the terminal device, but when the user operates the first operation control, it indicates that the user needs to perform a virtual shooting operation at the current time, so that the first image at the current time can be obtained, where the first image is an image of a real scene currently taken by the camera device.
S202, if the first image includes at least one object, acquire the object position of the at least one object in the first image.
In a possible implementation, the captured first image may include at least one object, where the object in this embodiment may be, for example, a person; the first image including at least one object indicates that a person was within the capture range of the camera device when the first image was captured. In other application scenarios, the objects included in the first image may also be other things, such as buildings or animals.
When the first image includes at least one object, this embodiment may acquire the object position of the at least one object in the first image, where the object position may include, for example, the positions of a number of boundary points of the object in the first image, or the position of the object's center point in the first image.
In another possible implementation, the captured first image may include no object; for example, when the object is a person, this indicates that no person was within the capture range of the camera device when the first image was captured, and in that case there is no need to determine object positions.
S203, if it is determined according to the object position of the at least one object that a target object exists among the at least one object, acquire a first group corresponding to the target object, where the target object is located at a position corresponding to a preset area in the graphical user interface.
After the object position of each object is determined, whether a target object exists among the at least one object may be determined according to the object position of the at least one object. The target object in this embodiment is located at the position corresponding to the preset area in the graphical user interface. In a possible implementation, the preset area may be, for example, an area of preset size at the center point of the graphical user interface, which can be understood as the crosshair of the shooting operation; the position corresponding to the preset area is then the position of the crosshair. That is, when performing a virtual shooting operation, the user needs to aim the shooting crosshair at an object for the virtual shooting operation to take effect.
Having determined the object position of the at least one object, it may be determined, for example, whether an object exists at the position corresponding to the preset area of the graphical user interface; if so, the object at the position corresponding to the preset area is determined as the target object, which is in fact the object aimed at by the current virtual shooting operation.
It can be understood that users are generally grouped when playing a shooting game: users in the same group can be understood as teammates, and users in different groups as opponents. In a shooting game, virtual shooting operations are generally performed on opponents, not on teammates.
Therefore, in this embodiment, after the target object is determined, the first group corresponding to the target object may also be acquired. In a possible implementation, the group corresponding to each object may be stored in a preset storage unit, and the first group corresponding to the target object may then be acquired from the preset storage unit.
S204, perform a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, where the control object is the object associated with the first terminal device.
In this embodiment, the object associated with the current first terminal device is the control object, which can be understood as the user currently operating the first terminal device. The current virtual shooting operation is thus actually a virtual shooting operation performed by the control object on the target object through the first terminal device. Based on the above description, the operations triggered for opponents and for teammates differ; therefore, in this embodiment, the relationship between the target object and the control object may be determined according to the first group corresponding to the target object and the second group corresponding to the control object, and the corresponding preset operation is then performed.
In one possible implementation, if the first group and the second group are different groups, meaning that the target object and the control object are not on the same team but are opponents, the preset operation may be, for example, a virtual shooting operation performed on the target object. Alternatively, if the first group and the second group are the same group, meaning that the target object and the control object are on the same team and are teammates, the preset operation may be, for example, displaying prompt information in the graphical user interface, the prompt information indicating that the first group and the second group are the same group.
In an actual implementation, the specific form of the preset operation may be extended according to actual requirements, as long as the preset operation is a corresponding operation determined according to the first group and the second group.
The object processing method provided by this embodiment of the present disclosure thus includes: in response to a first touch operation acting on the first operation control, acquiring a first image captured by the camera device at the current moment; if the first image includes at least one object, acquiring the object position of the at least one object in the first image; if it is determined according to the object position of the at least one object that a target object exists, acquiring a first group corresponding to the target object, where the target object is located at the position corresponding to a preset area in the graphical user interface; and performing a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, where the control object is the object associated with the first terminal device. By capturing an image of the real scene with the camera device, acquiring the first image at the moment of the first touch operation on the first operation control, determining the aimed target object according to the position of each object in the first image, and performing the preset operation according to the group of the target object and the group of the control object, a live-action simulated shooting game can be realized effectively on the basis of the real scene, without relying on additional equipment or venues, which effectively improves the flexibility of the game.
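For illustration only, the S201 to S204 flow can be sketched in code. Every name below (DetectedObject, find_target, on_first_touch, the GROUPS table) is an assumption and not part of the disclosure; object detection is assumed to have already produced bounding boxes:

```python
# Minimal sketch of the S201-S204 flow; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    identifier: str   # e.g. "Zhang San"
    box: tuple        # (x1, y1, x2, y2) in image coordinates

GROUPS = {"Zhang San": 1, "Li Si": 1, "Wang Er": 2}   # assumed group registry

def find_target(objects, preset_area):
    """S203: return the first object whose box overlaps the preset area."""
    ax1, ay1, ax2, ay2 = preset_area
    for obj in objects:
        x1, y1, x2, y2 = obj.box
        if x1 < ax2 and x2 > ax1 and y1 < ay2 and y2 > ay1:
            return obj
    return None

def on_first_touch(objects, preset_area, control_group):
    # S201/S202 are assumed done: `objects` were detected in the first
    # image captured at the moment of the first touch operation.
    target = find_target(objects, preset_area)
    if target is None:
        return "missed: crosshair is not on any object"
    if GROUPS[target.identifier] != control_group:    # S204: opponents
        return f"virtual hit on {target.identifier}"
    return "teammate aimed at: prompt only, no damage"  # S204: same group

print(on_first_touch([DetectedObject("Zhang San", (100, 100, 200, 300))],
                     (150, 150, 250, 250), control_group=2))
```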
On the basis of the foregoing embodiment, the object processing method provided by the present disclosure is described in further detail below with reference to fig. 3 to 9. Fig. 3 is a second flowchart of the object processing method provided by an embodiment of the present disclosure; fig. 4 is a schematic diagram of the object segmentation process; fig. 5 is a schematic diagram of the case where an object exists at the position corresponding to the preset area; fig. 6 is a schematic diagram of the case where no object exists at the position corresponding to the preset area; fig. 7 is a schematic diagram of determining the partial image of the target object; fig. 8 is a schematic diagram of storing object information in association; and fig. 9 is a schematic diagram of determining the object identifier of the target object.
As shown in fig. 3, the method includes:
s301, responding to a first touch operation acted on a first operation control, and acquiring a first image shot by the camera at the current moment.
The implementation manner of S301 is similar to that of S201, and is not described herein again.
S302, if the first image includes at least one object, perform object segmentation processing on the first image to determine the at least one object included in the first image.
In this embodiment, when the first image includes at least one object, the position of each object in the first image may be determined. In a possible implementation, object segmentation processing may be performed on the first image to determine the at least one object it includes; the segmentation may use any possible object segmentation algorithm, which this embodiment does not limit.
This can be understood with reference to fig. 4. Assuming the currently acquired first image is the image shown at 401 in fig. 4, it can be determined from fig. 4 that the first image may include at least one object. When determining whether the first image includes an object, the first image may be processed, for example, with an object recognition algorithm or an object recognition model; this embodiment places no particular limitation on this.
When it is determined that the first image includes at least one object, object segmentation processing may be performed on the first image to determine the at least one object it includes. Referring to fig. 4, suppose that after segmenting the first image 401, object 402, object 403, object 404, object 405, object 406, and object 407 are determined; the segmentation result of each object is represented symbolically by a rectangular box in fig. 4. In an actual implementation, the segmentation result of each object may instead be represented by a set of pixels: for any determined object, the segmented object may be represented by the set of pixels it occupies in the first image. A minimal sketch of both representations follows.
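For illustration only, the sketch below shows the two representations mentioned above on a toy binary mask. NumPy and all names here are assumptions; the disclosure does not prescribe a segmentation algorithm or data structure:

```python
# Illustrative only: two common representations of a segmentation result.
import numpy as np

mask = np.zeros((8, 8), dtype=bool)   # toy segmentation mask of one object
mask[2:6, 3:7] = True                 # rows 2-5, columns 3-6

# Representation 1: the set of pixels belonging to the object.
pixel_set = set(zip(*np.nonzero(mask)))

# Representation 2: the bounding rectangle enclosing those pixels.
rows, cols = np.nonzero(mask)
box = (int(cols.min()), int(rows.min()), int(cols.max()), int(rows.max()))

print(len(pixel_set), box)            # 16 (3, 2, 6, 5)
```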
This embodiment describes the case where the first image includes at least one object. Alternatively, the first image may include no object, i.e. at the current moment when the user performs the first touch operation on the first operation control, no object is within the capture range of the camera device. In that case, it can be understood that the user has triggered a shooting operation but no interactive object has been recognized; a virtual shooting operation, such as displaying a shooting animation, may still be triggered, but the shooting operation causes no damage to any object.
Alternatively, preset information may be displayed on the terminal device to prompt the user that there is currently no interactable object: for example, text in a preset style may be displayed, the terminal device may be controlled to vibrate, or a preset animation may be displayed in the graphical user interface. The specific prompt may be chosen according to actual needs, as long as it informs the user that there is currently no interactable object.
S303, determine the position of each object in the first image.
After the at least one object included in the first image is determined, the position of each object in the first image may be determined. In a possible implementation, the position in the first image of the region of the rectangular box in the object segmentation result of fig. 4 may be determined as the object's position in the first image. Alternatively, the position in the first image of the set of pixels corresponding to each object in the segmentation result may be determined as that object's position; this embodiment does not limit the specific representation of each object's position in the first image.
S304, determine the position of each object in the graphical user interface according to the object position of the at least one object in the first image, where the size of the first image is the same as that of the graphical user interface.
After the position of each object in the first image is determined, whether a target object exists among the at least one object may be determined according to those positions; the target object in this embodiment is located at the position corresponding to the preset area in the graphical user interface.
It can be understood that the first image in this embodiment is captured by the camera device and then displayed in the graphical user interface, so the size of the first image is the same as that of the graphical user interface. Therefore, in a possible implementation, an object's position in the first image may be directly determined as that object's position in the graphical user interface.
For example, if the position of an object in the first image is represented by a rectangular box whose four vertices in the first image are (x1, y1), (x2, y2), (x3, y3), (x4, y4), then the position of that object in the graphical user interface may also be represented by a rectangular box, and the coordinates of that box in the graphical user interface are likewise (x1, y1), (x2, y2), (x3, y3), (x4, y4). The coordinate representation here is only an example; in an actual implementation, the representation of the position may be chosen according to actual requirements, for example as distances between vertices and the boundary, which this embodiment does not limit.
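A minimal sketch of this mapping, assuming a rectangle given as (x1, y1, x2, y2). With equal image and interface sizes, as in this embodiment, the mapping degenerates to the identity; the scaled case is an extrapolation not stated in the text:

```python
def image_box_to_ui(box, image_size, ui_size):
    """Map a rectangle from first-image coordinates to interface coordinates."""
    sx = ui_size[0] / image_size[0]
    sy = ui_size[1] / image_size[1]
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# Equal sizes, as in this embodiment: the vertex coordinates are unchanged
# in the graphical user interface.
print(image_box_to_ui((100, 50, 300, 400), (1080, 1920), (1080, 1920)))
# (100.0, 50.0, 300.0, 400.0)
```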
S305, if the position of an object in the graphical user interface is located at the position corresponding to the preset area, determine the object at the position corresponding to the preset area as the target object.
After the positions of the objects in the graphical user interface are determined, the target object can be determined according to the position corresponding to the preset area in the graphical user interface and the positions of the objects in the graphical user interface.
In a possible implementation, it may be determined whether, among the at least one object, any object's position in the graphical user interface lies at the position corresponding to the preset area; if so, the object at the position corresponding to the preset area is determined as the target object.
This can be understood with reference to fig. 5. As shown in fig. 5, a first image is displayed in the graphical user interface, and the first image includes a plurality of objects, for example object 502, object 503, object 504, object 505, object 506, and object 507, each of which has a corresponding position in the graphical user interface. The position corresponding to the preset area in the graphical user interface may be, for example, the position of the circular area indicated by 501 in fig. 5, which can be understood as the shooting crosshair. It is worth noting that the preset area illustrated in fig. 5 corresponds to a circular area at the center of the graphical user interface; in an actual implementation, the location, shape, and size of the preset area may be chosen according to actual requirements, which this embodiment does not limit.
Based on fig. 5, it may be determined that, in the example of fig. 5, the position of object 505 in the graphical user interface lies at the position 501 corresponding to the preset area, so object 505 may be determined as the target object.
In a possible implementation, the position corresponding to the preset area may contain only one object; in the case illustrated in fig. 5, that object is determined as the target object. Alternatively, the position corresponding to the preset area may contain a plurality of objects. In that case, all of the objects it contains may be determined as target objects; or a single target object may be selected from them, for example the object whose position coincides with the center point of the preset area, the object whose position is closest to that center point, or the object occupying the largest area within the preset area.
It can be understood that, in this embodiment, "the position of the object in the graphical user interface is located at the position corresponding to the preset area" may mean that the region corresponding to the object's position intersects the position corresponding to the preset area, or that the object's region is entirely contained within it. This may depend on the game design, for example; this embodiment does not limit it, and it may be chosen according to actual needs. A sketch of one such selection rule follows.
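For illustration only, the sketch below implements intersection with the preset area plus the "closest to the center" tie-breaking rule named above; the rectangle format and function names are assumptions:

```python
# Sketch: pick one target among the boxes overlapping the preset area.
import math

def pick_target(boxes, preset_area):
    ax1, ay1, ax2, ay2 = preset_area
    cx, cy = (ax1 + ax2) / 2, (ay1 + ay2) / 2

    def intersects(b):
        x1, y1, x2, y2 = b
        return x1 < ax2 and x2 > ax1 and y1 < ay2 and y2 > ay1

    def centre_distance(b):
        x1, y1, x2, y2 = b
        return math.hypot((x1 + x2) / 2 - cx, (y1 + y2) / 2 - cy)

    candidates = [b for b in boxes if intersects(b)]
    return min(candidates, key=centre_distance) if candidates else None

# Two objects overlap the area; the one whose centre is nearer is chosen.
print(pick_target([(10, 10, 55, 55), (45, 45, 120, 120)], (40, 40, 60, 60)))
```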
It should be noted that the above describes the implementation in which, among the at least one object, some object's position in the graphical user interface lies at the position corresponding to the preset area. In another possible implementation, no object may be present at the position corresponding to the preset area; for example, as can be understood with reference to fig. 6, no object exists at the position 601 corresponding to the preset area of the graphical user interface.
In this case, it can be understood that the user triggered a firing operation, but no object is at the shooting crosshair, i.e. no object is currently aimed at. A virtual firing operation, such as displaying a firing animation, may still be triggered, but this firing operation causes no damage to any object.
Alternatively, preset information may be displayed on the terminal device to prompt the user that no object is currently aimed at: for example, text in a preset style may be displayed, the terminal device may be controlled to vibrate, or a preset animation may be displayed in the graphical user interface. The specific prompt may be chosen according to actual needs, as long as it informs the user that no object is currently aimed at.
S306, acquire a partial image corresponding to the target object in the first image.
After the target object is determined, this embodiment needs to determine the first group corresponding to the target object, so as to decide how the subsequent preset operation is performed.
In a possible implementation, image recognition may be performed on the target object. For ease of processing and to improve processing efficiency, a partial image corresponding to the target object may first be obtained from the first image. This can be understood with reference to fig. 7: as shown in fig. 7, in the first image 701, assuming 702 is the currently determined target object, the partial image corresponding to target object 702 is obtained from the first image 701, yielding for example the image shown at 703 in fig. 7.
In a possible implementation, the partial image corresponding to the target object may be obtained by cropping the first image. This embodiment does not limit the specific size, shape, and the like of the partial image, as long as the partial image is obtained from the first image and includes the target object.
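A minimal cropping sketch, assuming the first image is a NumPy array and the target's box is (x1, y1, x2, y2); the margin parameter is an assumption:

```python
import numpy as np

def crop_partial_image(first_image, box, margin=10):
    """Crop the target object's region (plus a margin) out of the frame."""
    x1, y1, x2, y2 = box
    h, w = first_image.shape[:2]
    x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
    x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
    return first_image[y1:y2, x1:x2]

frame = np.zeros((1920, 1080, 3), dtype=np.uint8)   # H x W x channels
partial = crop_partial_image(frame, (300, 500, 600, 1100))
print(partial.shape)                                 # (620, 320, 3)
```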
S307, match the partial image corresponding to the target object against the image information of at least one object, and determine the object identifier of the target object.
After the partial image of the target object is determined, the partial image may be recognized, for example, to determine the group currently corresponding to the target object. In a possible implementation, before the first group corresponding to the target object is acquired, the following operations may be performed:
receiving image information of at least one object, an object identifier of each of the at least one object, and the group corresponding to each of the at least one object; and, for any object, storing the image information of the object and the group corresponding to the object in association with the object identifier of the object.
In a possible implementation, before the game starts, the image information of each user participating in the game may be entered using a mobile device, whereby the image information of at least one object is received; each user may also input a corresponding object identifier and group, where the object identifier may be, for example, the user's name or nickname.
This can be understood with reference to fig. 8. Referring to fig. 8, suppose the object identifiers of the objects currently participating in the game include Zhang San, Li Si, Wang Er, and Liu Yi. Zhang San's corresponding group is group 1, and Zhang San's image information is shown in fig. 8; similarly, Li Si's corresponding group is group 1, Wang Er's corresponding group is group 2, and Liu Yi's corresponding group is group 2, with the image information of each shown in fig. 8. As illustrated in fig. 8, the image information of each object and the group of each object are stored in association with the respective object identifier.
The content of fig. 8 is only an exemplary description; in an actual implementation, the specific form of the object identifiers, and of the image information and group associated with each identifier, may be chosen according to actual requirements, which this embodiment does not limit.
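For illustration, the associated storage can be sketched as a simple table keyed by object identifier; the structure, field names, and file names below are assumptions:

```python
# Sketch of the associated storage: per object identifier, the registered
# image information and the group are stored together.
registry = {
    "Zhang San": {"image_info": "zhang_san.jpg", "group": 1},
    "Li Si":     {"image_info": "li_si.jpg",     "group": 1},
    "Wang Er":   {"image_info": "wang_er.jpg",   "group": 2},
    "Liu Yi":    {"image_info": "liu_yi.jpg",    "group": 2},
}

def register(identifier, image_info, group):
    """Store an object's image information and group under its identifier."""
    registry[identifier] = {"image_info": image_info, "group": group}

print(registry["Wang Er"]["group"])   # 2
```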
Based on the above description, when determining the group corresponding to the target object, the partial image corresponding to the target object may first be matched against the image information of the at least one object to determine the object identifier of the target object.
In a possible implementation, the partial image of the target object may be matched against the image information of the at least one object to determine the matching degree between the partial image and each piece of image information; the object identifier associated with the image information of highest matching degree is then determined as the object identifier of the target object.
For example, as shown in fig. 9, suppose the image information of the current at least one object includes image information 902, image information 903, image information 904, and image information 905 shown in fig. 9. The partial image 901 of the target object may be matched against image information 902, 903, 904, and 905 respectively to determine the image information with the highest matching degree.
Assuming the matching degree between image information 902 and the partial image 901 of the target object is currently the highest, the object identifier "Zhang San" associated with image information 902 may, for example, be determined as the object identifier of the target object; this process can also be understood as determining that the identity of the target object is Zhang San.
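A minimal sketch of this highest-matching-degree selection. The matching_degree function is a placeholder; a real system might use, say, a face-embedding similarity, which is an assumption, since the disclosure does not fix a matching algorithm:

```python
def matching_degree(partial_image, image_info):
    """Placeholder score in [0, 1]; a real system would compare image features."""
    return 1.0 if partial_image == image_info else 0.0

def identify(partial_image, registry):
    """Return the identifier whose stored image information matches best."""
    scores = {ident: matching_degree(partial_image, rec["image_info"])
              for ident, rec in registry.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

registry = {"Zhang San": {"image_info": "img_902", "group": 1},
            "Li Si":     {"image_info": "img_903", "group": 1}}
print(identify("img_902", registry))   # ('Zhang San', 1.0)
```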
S308, determining the group associated with the object identifier of the target object as a first group corresponding to the target object.
Once the object identifier of the target object is determined, the identity of the target object is known. Based on the above description, each object identifier is stored in association with that object's group, so the group associated with the object identifier of the target object may be determined as the first group corresponding to the target object.
Continuing with the above example: having determined that the object identifier of the target object is "Zhang San", it may be determined from fig. 8 that the group associated with "Zhang San" is group 1, so the first group corresponding to the target object is group 1.
In an actual implementation, the way groups are divided and the number of groups may be chosen according to actual requirements; for example, there may also be a group 3, a group 4, and so on, which this embodiment does not limit.
S309, determine whether the first group and the second group are different groups; if yes, execute S310; if no, execute S311.
After the first group of the target object is determined, it may be determined whether the first group of the target object and the second group corresponding to the control object are different groups.
In a possible implementation, suppose the target object is Zhang San and the object currently holding the first terminal device and playing the game is Song Liu. It is then judged whether Zhang San and Song Liu are in different groups, i.e. whether Zhang San and Song Liu are teammates or opponents.
S310, update the attribute value corresponding to the target object.
In a possible implementation, if the first group and the second group are different groups, indicating that the target object and the control object are opponents, it can be determined through the series of operations above that the shooting crosshair is currently aimed at the target object, and a virtual shooting operation can be performed on the target object.
In a shooting game, a shooting operation inflicts virtual damage on the hit object, so the attribute value corresponding to the target object needs to be updated.
In a possible implementation, the attribute value corresponding to the target object at the current moment and the preset value corresponding to the first operation control may be acquired; the preset value is then subtracted from the attribute value corresponding to the target object to obtain the updated attribute value corresponding to the target object.
For example, if the current target object is Zhang San, Zhang San's current virtual life value is 2000, and the preset value corresponding to the first operation control is 688 (meaning one virtual shooting operation inflicts 688 virtual damage), then subtracting the preset value 688 from the attribute value 2000 yields the updated attribute value 1312 for the target object.
In an actual implementation process, the specific implementation of the attribute value and the specific implementation of the preset value corresponding to the first operation control may be selected according to an actual requirement, which is not limited in this embodiment.
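As a worked example of this update (together with the end-state threshold described later in this section), the sketch below uses the figures from the text; the function name and the FIRST_VALUE constant are assumptions:

```python
FIRST_VALUE = 0   # example threshold below which an object is out of the game

def apply_hit(attribute_value, preset_value):
    """Subtract the control's preset value from the target's attribute value."""
    updated = attribute_value - preset_value
    state = "ended" if updated <= FIRST_VALUE else "active"
    return updated, state

print(apply_hit(2000, 688))   # (1312, 'active')
print(apply_hit(500, 688))    # (-188, 'ended')
```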
S311, display prompt information in the graphical user interface, the prompt information indicating that the first group and the second group are the same group.
In another possible implementation, if the first group and the second group are the same group, indicating that the target object and the control object are in a teammate relationship, no damage is inflicted on the teammate. Instead, prompt information may be displayed in the graphical user interface to inform the user that the first group corresponding to the current target object and the second group corresponding to the control object are the same group. The prompt information may be, for example, text in a preset style, or a preset color or pattern displayed around the target object.
In one possible implementation, when the first group and the second group are the same group, a preset shooting animation may still be displayed in the graphical user interface; the only difference is that no virtual shooting damage is inflicted on the target object.
In the object processing method provided by this embodiment of the present disclosure, performing object segmentation processing on the first image yields each object in the first image and its position simply and effectively. The target object is then determined by comparing each object's position in the graphical user interface with the position corresponding to the preset area, ensuring that the determined target object is the object aimed at by the current shooting operation and hence that the subsequent virtual shooting operation is valid. After the target object is determined, the partial image containing the target object is compared with the image information received in advance, so the first group of the target object is determined quickly and effectively, and the corresponding preset operation is then performed according to the first group of the target object and the second group of the control object. An AR shooting game can thus be realized on the terminal device, effectively improving the operational flexibility of the game.
On the basis of the above embodiment, in the object processing method provided by the present disclosure, a supplementary control may be displayed in the graphical user interface in addition to the first operation control. It can be understood that a reload operation needs to be performed in a shooting game, and the supplementary control in this embodiment may be, for example, the control that triggers the reload operation.
For example, as can be understood in conjunction with fig. 10, fig. 10 is an implementation diagram of a supplementary control provided by the embodiment of the present disclosure.
The preceding schematic diagrams showed portrait-mode display and operation; in an actual implementation, landscape-mode display and operation as shown in fig. 10 are also possible. As shown in fig. 10, a first image may be displayed in the graphical user interface, the first image including object 1004 and object 1005, where 1003 is the position corresponding to the preset area of the graphical user interface, i.e. the shooting crosshair, and 1001 is the first operation control, i.e. the shooting control. These are implemented as described above and are not repeated here.
The graphical user interface shown in fig. 10 may further include a supplementary control 1002, i.e. the reload control shown in fig. 10. In a possible implementation, in response to a second touch operation acting on the supplementary control in the graphical user interface, the number of controllable targets corresponding to the first terminal device may be updated to a preset number.
In this embodiment, a controllable target may be understood as, for example, a bullet used for virtual shooting. Referring to fig. 10, suppose the number of remaining bullets at the current moment is 8; after the second touch operation acting on the supplementary control 1002 is responded to, the number of remaining bullets may be updated to the preset number. In an actual implementation, the specific preset number is not particularly limited and may be chosen and set according to actual requirements.
Fig. 10 introduces the supplementary control in landscape mode; for portrait-mode display and operation, a supplementary control may likewise be provided, implemented similarly to the landscape case described here, and not repeated.
By providing the supplementary control and responding to operations on the supplementary control, the number of controllable targets corresponding to the first terminal device can be updated, ensuring the operability and completeness of the shooting game.
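A minimal sketch of the reload behaviour; the class, method name, and MAGAZINE_SIZE value are assumptions, since the disclosure leaves the preset number open:

```python
MAGAZINE_SIZE = 30   # the preset number; chosen here only for illustration

class Weapon:
    def __init__(self, bullets):
        self.bullets = bullets

    def on_supplement_control(self):
        """Second touch operation: reset the controllable targets."""
        self.bullets = MAGAZINE_SIZE

weapon = Weapon(bullets=8)    # e.g. 8 bullets remaining, as in fig. 10
weapon.on_supplement_control()
print(weapon.bullets)         # 30
```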
In the object processing method provided by this embodiment of the present disclosure, after the attribute value corresponding to the target object is updated, it may further be determined whether the updated attribute value corresponding to the target object is less than or equal to a first value. The first value in this embodiment is the quantity used to judge whether an object is out of the game; the first value may be, for example, 0, and in an actual implementation the specific first value may be chosen according to actual requirements.
In a possible implementation manner, if it is determined that the updated attribute value corresponding to the target object is less than or equal to the first numerical value, the game state corresponding to the target object may be marked as the end state.
For example, when the attribute value of the target object "zhang san" is 0 or less than 0, it may be determined that the target object "zhang san" has no life value in the current game, and the target object "zhang san" is played from the current game, and then the game state corresponding to the target object "zhang san" may be marked as an end state to indicate that "zhang san" is played.
In addition, on the basis of the above embodiments, the object processing method provided by the present disclosure may further determine whether the game has ended after the game state corresponding to the target object is marked as the end state. It can be understood that the game is determined to have ended when all remaining objects in the game belong to the same group.
In a possible implementation manner, the remaining game objects whose game states are not the end state may be acquired according to the respective game states of the objects; the groups corresponding to the remaining game objects are then determined, and if the remaining game objects all belong to the same group, end information indicating that the current game has ended is displayed on the screen of the first terminal device.
For example, this can be understood with reference to fig. 11, which is a schematic diagram of a game-state implementation provided by an embodiment of the present disclosure.
For example, suppose the objects currently participating in the game are Zhang San, Li Si, Wang Bi, Liu Yi and Song Liu, where Zhang San and Li Si belong to group 1 and Wang Bi, Liu Yi and Song Liu belong to group 2, and the game states of the respective objects are as shown in fig. 11. If the remaining game objects are Liu Yi and Song Liu, and Liu Yi and Song Liu belong to the same group, it can be determined that the current game has ended, and ending information can be displayed on the screen of the first terminal device.
In an actual implementation process, the specific form of the ending information may be selected according to actual requirements; this embodiment does not limit it, as long as the ending information can indicate that the current game has ended.
In addition, in another possible implementation manner, if the remaining game objects are not all in the same group, it may be determined that the current game has not yet ended, and the operations described in the above embodiments continue to be performed.
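A minimal sketch of this end-of-game test, under the assumption that each object is represented by a small record holding its group and game state (the data layout and the function name is_game_over are illustrative, not from the patent):

```python
# Illustrative sketch; the record layout and is_game_over name are assumptions.
def is_game_over(players: list[dict]) -> bool:
    remaining = [p for p in players if p["game_state"] != "ended"]
    groups = {p["group"] for p in remaining}
    # The current game has ended when every remaining object is in one group.
    return len(groups) <= 1


players = [
    {"name": "Zhang San", "group": 1, "game_state": "ended"},
    {"name": "Li Si",     "group": 1, "game_state": "ended"},
    {"name": "Wang Bi",   "group": 2, "game_state": "ended"},
    {"name": "Liu Yi",    "group": 2, "game_state": "playing"},
    {"name": "Song Liu",  "group": 2, "game_state": "playing"},
]
print(is_game_over(players))  # -> True: only group 2 remains
```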
Based on the content described in the foregoing embodiments, it can be seen that the execution subject of the foregoing embodiments is the first terminal device. In an actual implementation process, the terminal devices held by the objects in the same game should keep their data and information synchronized; for example, after the first terminal device updates the attribute value of the target object, the updated information should be synchronized to the other terminal devices to ensure consistency of the game data.
In a possible implementation manner, the first terminal device may send the game information it has determined to a server, so that the server synchronizes the game information to at least one second terminal device. A second terminal device is a terminal device corresponding to a game object other than the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game states of the respective game objects, and the ending information.
For example, this can be understood with reference to fig. 12, which is a schematic diagram of game-information synchronization provided by an embodiment of the present disclosure.
As shown in fig. 12, the first terminal device 1201 may send the game information to the server 1202, and the server 1202 then synchronizes the game information to the second terminal device 1203, the second terminal device 1204, and the second terminal device 1205. In an actual implementation process, the specific number of second terminal devices may be selected according to actual requirements, and any terminal device in the same game as the first terminal device may serve as a second terminal device in this embodiment.
For example, suppose the objects currently participating in the game are Zhang San, Li Si, Wang Bi, Liu Yi and Song Liu, and the terminal device held by Song Liu is the first terminal device; then the terminal devices held by Zhang San, Li Si, Wang Bi and Liu Yi may all be second terminal devices. It can be understood that, in an actual implementation process, any one of the terminal devices participating in the game may serve as the first terminal device in this embodiment, and the remaining terminal devices automatically become the second terminal devices.
In an actual implementation process, the specific content of the game information may be selected and extended according to actual requirements; it can be understood that any game-related information that needs to be synchronized may serve as the game information in this embodiment.
Synchronizing the game information among the terminal devices ensures that the game proceeds correctly and guarantees the validity and correctness of the game.
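For illustration, the client-side synchronization step might be sketched as follows; the endpoint URL, the payload keys and the function name are assumptions, and the patent does not prescribe any particular transport or message format:

```python
# Illustrative sketch; SERVER_URL and the payload layout are assumptions.
import json
from urllib import request

SERVER_URL = "https://example.com/game/sync"  # hypothetical server endpoint


def send_game_info(updated_attribute_value: int,
                   game_states: dict,
                   end_info: str) -> None:
    payload = {
        "updated_attribute_value": updated_attribute_value,  # target object
        "game_states": game_states,  # game state of each game object
        "end_info": end_info,        # ending information, if any
    }
    req = request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The server would then relay the game information to the second
    # terminal devices participating in the same game.
    with request.urlopen(req) as resp:
        resp.read()
```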
In summary, the object processing method according to the embodiments of the present disclosure can simply and quickly implement an AR-based real-person simulated shooting game using terminal devices, without requiring a dedicated field or dedicated equipment, thereby effectively improving the flexibility of game operations, saving game cost, and widening the range of target users.
Fig. 13 is a schematic structural diagram of an object processing apparatus according to an embodiment of the present disclosure. As shown in fig. 13, the object processing apparatus 1300 of the present embodiment may include: a first obtaining module 1301, a second obtaining module 1302, a third obtaining module 1303, and a processing module 1304.
A first obtaining module 1301, configured to acquire, in response to a first touch operation applied to the first operation control, a first image captured by the camera device at the current time;
a second obtaining module 1302, configured to obtain an object position of at least one object in the first image if the first image includes the at least one object;
a third obtaining module 1303, configured to acquire a first group corresponding to the target object if it is determined that the target object exists in the at least one object according to the object position of the at least one object, where the target object is located at a position corresponding to a preset area in the graphical user interface;
a processing module 1304, configured to execute a preset operation according to the first group corresponding to the target object and the second group corresponding to a control object, where the control object is an object associated with the first terminal device.
In a possible implementation manner, the processing module 1304 is specifically configured to:
if the first group and the second group are different groups, updating the attribute value corresponding to the target object;
and if the first group and the second group are the same group, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first group and the second group are the same group.
In a possible implementation manner, the processing module 1304 is specifically configured to:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
and subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
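For illustration, this subtraction can be written as a minimal sketch; the preset value 10 and the function name update_attribute_value are assumptions, not identifiers from the patent:

```python
# Illustrative sketch; PRESET_VALUE and the function name are assumptions.
PRESET_VALUE = 10  # preset value corresponding to the first operation control


def update_attribute_value(attribute_value: int) -> int:
    # Subtract the control's preset value from the target object's attribute
    # value to obtain the updated attribute value.
    return attribute_value - PRESET_VALUE


print(update_attribute_value(100))  # -> 90
```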
In a possible implementation manner, the second obtaining module 1302 is specifically configured to:
performing object segmentation processing on the first image, and determining at least one object included in the first image;
and determining the position of each object in the first image.
In a possible implementation manner, the third obtaining module 1303 is specifically configured to:
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image, wherein the size of the first image is the same as that of the graphical user interface;
and if the position of an object in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as the target object.
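Because the first image and the graphical user interface have the same size, an object's pixel position in the image can be reused directly as its position in the interface, and the target test reduces to a point-in-region check. A minimal sketch follows, assuming a rectangular preset area and invented data shapes (the patent does not fix the area's shape):

```python
# Illustrative sketch; the rectangle and data shapes are assumptions.
from typing import Optional

PRESET_AREA = (350, 250, 370, 270)  # left, top, right, bottom (hypothetical)


def find_target(object_positions: dict) -> Optional[str]:
    """object_positions maps an object id to its (x, y) interface position."""
    left, top, right, bottom = PRESET_AREA
    for object_id, (x, y) in object_positions.items():
        # An object whose interface position falls inside the preset area
        # (e.g. the shooting sight) is determined to be the target object.
        if left <= x <= right and top <= y <= bottom:
            return object_id
    return None


print(find_target({"obj-1": (100, 100), "obj-2": (360, 260)}))  # -> obj-2
```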
In a possible implementation manner, the processing module 1304 is further configured to:
before acquiring a first group corresponding to the target object, receiving image information of at least one object, an object identifier of each object and a group corresponding to each object;
and for any one object, storing the image information of the object and the grouping corresponding to the object in association with the object identification of the object.
In a possible implementation manner, the third obtaining module 1303 is specifically configured to:
acquiring a partial image corresponding to the target object in the first image;
matching the partial image corresponding to the target object with the image information of the at least one object, and determining the object identifier of the target object;
and determining the group associated with the object identifier of the target object as a first group corresponding to the target object.
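A sketch of this lookup under stated assumptions: the similarity metric below is a trivial stand-in (a real system might compare face or feature embeddings; the patent does not prescribe a matching algorithm), and the registry layout is invented for illustration:

```python
# Illustrative sketch; similarity() is a naive stand-in metric and the
# registry layout (identifier -> image bytes + group) is an assumption.

def similarity(image_a: bytes, image_b: bytes) -> float:
    common = sum(a == b for a, b in zip(image_a, image_b))
    return common / max(len(image_a), len(image_b), 1)


def lookup_first_group(partial_image: bytes, registry: dict) -> str:
    # Match the partial image against the stored image information of each
    # object; the best match yields the object identifier, whose associated
    # group is the first group corresponding to the target object.
    best_id = max(registry,
                  key=lambda oid: similarity(partial_image, registry[oid]["image"]))
    return registry[best_id]["group"]


registry = {
    "zhang-san": {"image": b"\x01\x02\x03", "group": "group 1"},
    "liu-yi":    {"image": b"\x09\x08\x07", "group": "group 2"},
}
print(lookup_first_group(b"\x01\x02\x00", registry))  # -> group 1
```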
In a possible implementation manner, the processing module 1304 is further configured to:
and responding to a second touch operation of a supplementary control acting on the graphical user interface, and updating the number of the controllable targets corresponding to the first terminal equipment to a preset number.
In a possible implementation manner, the processing module 1304 is further configured to:
after the attribute value corresponding to the target object is updated, if the updated attribute value corresponding to the target object is determined to be smaller than or equal to a first numerical value, the game state corresponding to the target object is marked as an end state.
In a possible implementation manner, the processing module 1304 is further configured to:
after the game state corresponding to the target object is marked as an end state, acquiring the remaining game objects of which the game states are not the end states according to the respective game states of the objects;
determining a group corresponding to each of the remaining game objects;
and if all the remaining game objects belong to the same group, displaying ending information on a screen of the first terminal device, wherein the ending information is used for indicating the end of the current game.
In a possible implementation manner, the processing module 1304 is further configured to:
sending the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
the second terminal device is a terminal device corresponding to each game object except the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object and the ending information.
The present disclosure provides an object processing method and apparatus, which are applied to the field of augmented reality in computer technology, so as to improve the flexibility of game operations.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
FIG. 14 shows a schematic block diagram of an example electronic device 1400 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the electronic device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other via a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 connect to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1401 executes the respective methods and processes described above, such as the object processing method. For example, in some embodiments, the object handling methods may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1400 via ROM 1402 and/or communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the object processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the object processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in the cloud computing service system that remedies the defects of high management difficulty and weak service expansibility in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (25)

1. An object processing method is applied to a first terminal device comprising a camera device and a screen, wherein the camera device is used for shooting images, the screen is used for displaying a graphical user interface, the graphical user interface comprises the images shot by the camera device and a first operation control, and the method comprises the following steps:
responding to a first touch operation acting on the first operation control, and acquiring a first image shot by the camera at the current moment;
if the first image comprises at least one object, acquiring the object position of the at least one object in the first image;
if a target object is determined to exist in the at least one object according to the object position of the at least one object, acquiring a first group corresponding to the target object, wherein the target object is located at a position corresponding to a preset area in the graphical user interface;
and executing a preset operation according to the first group corresponding to the target object and the second group corresponding to a control object, wherein the control object is an object associated with the first terminal device.
2. The method of claim 1, wherein the performing a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object comprises:
if the first group and the second group are different groups, updating the attribute value corresponding to the target object;
and if the first group and the second group are the same group, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first group and the second group are the same group.
3. The method of claim 2, wherein the updating the attribute value corresponding to the target object comprises:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
and subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
4. The method according to any one of claims 1-3, wherein said obtaining an object position of said at least one object in said first image comprises:
performing object segmentation processing on the first image, and determining at least one object included in the first image;
and determining the position of each object in the first image.
5. The method of claim 4, wherein the determining that a target object exists in the at least one object according to the object location of the at least one object comprises:
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image, wherein the size of the first image is the same as that of the graphical user interface;
and if the position of an object in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as the target object.
6. The method of any of claims 2-5, prior to obtaining the first group corresponding to the target object, the method further comprising:
receiving image information of at least one object, an object identifier of each object and a group corresponding to each object;
and for any one object, storing the image information of the object and the grouping corresponding to the object in association with the object identification of the object.
7. The method of claim 6, wherein the obtaining the first group corresponding to the target object comprises:
acquiring a partial image corresponding to the target object in the first image;
matching the partial image corresponding to the target object with the image information of the at least one object, and determining the object identifier of the target object;
and determining the group associated with the object identifier of the target object as a first group corresponding to the target object.
8. The method of any of claims 1-7, further comprising:
and responding to a second touch operation of a supplementary control acting on the graphical user interface, and updating the number of the controllable targets corresponding to the first terminal equipment to a preset number.
9. The method according to any one of claims 1-8, after updating the attribute value corresponding to the target object, the method further comprising:
and if the updated attribute value corresponding to the target object is determined to be less than or equal to the first numerical value, marking the game state corresponding to the target object as an end state.
10. The method of claim 9, after marking the game state corresponding to the target object as an end state, the method further comprising:
acquiring the remaining game objects of which the game states are not the end states according to the respective game states of the objects;
determining a group corresponding to each of the remaining game objects;
and if all the remaining game objects belong to the same group, displaying ending information on a screen of the first terminal device, wherein the ending information is used for indicating the end of the current game.
11. The method according to any one of claims 2-10, further comprising:
sending the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
the second terminal device is a terminal device corresponding to each game object except the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object and the ending information.
12. An object processing apparatus applied to a first terminal device including an image pickup apparatus and a screen, the image pickup apparatus being configured to capture an image, the screen being configured to display a graphical user interface including the image captured by the image pickup apparatus and a first operation control, includes:
the first acquisition module is used for responding to a first touch operation acting on the first operation control and acquiring a first image shot by the camera at the current moment;
a second obtaining module, configured to obtain an object position of at least one object in the first image if the first image includes the at least one object;
a third obtaining module, configured to acquire a first group corresponding to a target object if it is determined that the target object exists in the at least one object according to an object position of the at least one object, where the target object is located at a position corresponding to a preset area in the graphical user interface;
and the processing module is configured to execute a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, wherein the control object is an object associated with the first terminal device.
13. The apparatus of claim 12, wherein the processing module is specifically configured to:
if the first group and the second group are different groups, updating the attribute value corresponding to the target object;
and if the first group and the second group are the same group, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first group and the second group are the same group.
14. The apparatus of claim 13, wherein the processing module is specifically configured to:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
and subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
15. The apparatus according to any one of claims 12 to 14, wherein the second obtaining module is specifically configured to:
performing object segmentation processing on the first image, and determining at least one object included in the first image;
and determining the position of each object in the first image.
16. The apparatus according to claim 15, wherein the third obtaining module is specifically configured to:
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image, wherein the size of the first image is the same as that of the graphical user interface;
and if the position of an object in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as the target object.
17. The apparatus of any of claims 13-16, the processing module further to:
before acquiring a first group corresponding to the target object, receiving image information of at least one object, an object identifier of each object and a group corresponding to each object;
and for any one object, storing the image information of the object and the grouping corresponding to the object in association with the object identification of the object.
18. The apparatus according to claim 17, wherein the third obtaining module is specifically configured to:
acquiring a partial image corresponding to the target object in the first image;
matching the partial image corresponding to the target object with the image information of the at least one object, and determining the object identifier of the target object;
and determining the group associated with the object identifier of the target object as a first group corresponding to the target object.
19. The apparatus of any of claims 12-18, the processing module further to:
and responding to a second touch operation of a supplementary control acting on the graphical user interface, and updating the number of the controllable targets corresponding to the first terminal equipment to a preset number.
20. The apparatus of any of claims 12-19, the processing module further to:
after the attribute value corresponding to the target object is updated, if the updated attribute value corresponding to the target object is determined to be smaller than or equal to a first numerical value, the game state corresponding to the target object is marked as an end state.
21. The apparatus of claim 20, the processing module further to:
after the game state corresponding to the target object is marked as an end state, acquiring the remaining game objects of which the game states are not the end states according to the respective game states of the objects;
determining a group corresponding to each of the remaining game objects;
and if all the remaining game objects belong to the same group, displaying ending information on a screen of the first terminal device, wherein the ending information is used for indicating the end of the current game.
22. The apparatus of any of claims 13-21, the processing module further to:
sending the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
the second terminal device is a terminal device corresponding to each game object except the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object and the ending information.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
CN202110898692.6A 2021-08-05 2021-08-05 Object processing method and device Active CN113577766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110898692.6A CN113577766B (en) 2021-08-05 2021-08-05 Object processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110898692.6A CN113577766B (en) 2021-08-05 2021-08-05 Object processing method and device

Publications (2)

Publication Number Publication Date
CN113577766A true CN113577766A (en) 2021-11-02
CN113577766B CN113577766B (en) 2024-04-02

Family

ID=78255572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110898692.6A Active CN113577766B (en) 2021-08-05 2021-08-05 Object processing method and device

Country Status (1)

Country Link
CN (1) CN113577766B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010041034A1 (en) * 2008-10-09 2010-04-15 Isis Innovation Limited Visual tracking of objects in images, and segmentation of images
US20130044912A1 (en) * 2011-08-19 2013-02-21 Qualcomm Incorporated Use of association of an object detected in an image to obtain information to display to a user
US20140344762A1 (en) * 2013-05-14 2014-11-20 Qualcomm Incorporated Augmented reality (ar) capture & play
CN108066981A (en) * 2016-11-12 2018-05-25 金德奎 A kind of AR or MR method for gaming identified based on position and image and system
CN106984043A (en) * 2017-03-24 2017-07-28 武汉秀宝软件有限公司 The method of data synchronization and system of a kind of many people's battle games
CN109701280A (en) * 2019-01-24 2019-05-03 网易(杭州)网络有限公司 The control method and device that foresight is shown in a kind of shooting game
WO2021031755A1 (en) * 2019-08-19 2021-02-25 Oppo广东移动通信有限公司 Interactive method and system based on augmented reality device, electronic device, and computer readable medium
CN110585712A (en) * 2019-09-20 2019-12-20 腾讯科技(深圳)有限公司 Method, device, terminal and medium for throwing virtual explosives in virtual environment
CN111672119A (en) * 2020-06-05 2020-09-18 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for aiming virtual object
CN111821691A (en) * 2020-07-24 2020-10-27 腾讯科技(深圳)有限公司 Interface display method, device, terminal and storage medium
CN112107858A (en) * 2020-09-17 2020-12-22 腾讯科技(深圳)有限公司 Prop control method and device, storage medium and electronic equipment
CN112138385A (en) * 2020-10-28 2020-12-29 腾讯科技(深圳)有限公司 Aiming method and device of virtual shooting prop, electronic equipment and storage medium
CN112991157A (en) * 2021-03-30 2021-06-18 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
奶牛关 (Cowlevel): "Why don't MOBA-type games enable friendly-fire damage?", pages 1 - 3, Retrieved from the Internet <URL:https://cowlevel.net/question/1928712> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092675A (en) * 2021-11-22 2022-02-25 北京百度网讯科技有限公司 Image display method, image display device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN113577766B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
US20200393953A1 (en) Method and apparatus, computer device, and storage medium for picking up a virtual item in a virtual environment
JP7035185B2 (en) Image processing methods, electronic devices, and storage media
CN111638793B (en) Display method and device of aircraft, electronic equipment and storage medium
US10532271B2 (en) Data processing method for reactive augmented reality card game and reactive augmented reality card game play device, by checking collision between virtual objects
KR102629359B1 (en) Virtual object attack prompt method and device, and terminal and storage medium
CN110465087B (en) Virtual article control method, device, terminal and storage medium
JP2018512988A (en) Information processing method, terminal, and computer storage medium
WO2022247592A1 (en) Virtual prop switching method and apparatus, terminal, and storage medium
US11020663B2 (en) Video game with automated screen shots
CN113350802B (en) Voice communication method, device, terminal and storage medium in game
WO2021227684A1 (en) Method for selecting virtual objects, apparatus, terminal and storage medium
US11918900B2 (en) Scene recognition method and apparatus, terminal, and storage medium
CN111672109A (en) Game map generation method, game testing method and related device
CN113350793B (en) Interface element setting method and device, electronic equipment and storage medium
WO2023138192A1 (en) Method for controlling virtual object to pick up virtual prop, and terminal and storage medium
CN108211363B (en) Information processing method and device
US20220355202A1 (en) Method and apparatus for selecting ability of virtual object, device, medium, and program product
CN110801629B (en) Method, device, terminal and medium for displaying virtual object life value prompt graph
KR20220098355A (en) Methods and apparatus, devices, media, and articles for selecting a virtual object interaction mode
CN113577766B (en) Object processing method and device
WO2021244237A1 (en) Virtual object control method and apparatus, computer device, and storage medium
CN112843719A (en) Skill processing method, skill processing device, storage medium and computer equipment
CN115040875A (en) Method and device for exchanging virtual weapons in game, storage medium and electronic equipment
CN114053714A (en) Virtual object control method and device, computer equipment and storage medium
CN113769386A (en) Method and device for displaying virtual object in game and electronic terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant