CN113577766B - Object processing method and device - Google Patents


Info

Publication number
CN113577766B
Authority
CN
China
Prior art keywords
image
target object
game
target
user interface
Prior art date
Legal status
Active
Application number
CN202110898692.6A
Other languages
Chinese (zh)
Other versions
CN113577766A
Inventor
王斐
王珂欣
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110898692.6A
Publication of CN113577766A
Application granted
Publication of CN113577766B
Status: Active
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42: Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/426: Processing input control signals involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals involving aspects of the displayed game scene
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/837: Shooting of targets
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80: Features specially adapted for executing a specific type of game
    • A63F 2300/8076: Shooting

Abstract

The disclosure provides an object processing method and device, relating to the field of augmented reality in computer technology. A specific implementation scheme is as follows: in response to a first touch operation acting on a first operation control, a first image captured by the image capturing device at the current moment is acquired. If the first image includes at least one object, the object position of the at least one object in the first image is acquired. If it is determined, according to the object position of the at least one object, that a target object exists among the at least one object, a first group corresponding to the target object is acquired, the target object being located at a position corresponding to a preset area in the graphical user interface. A preset operation is then executed according to the first group corresponding to the target object and a second group corresponding to a control object, the control object being the object associated with the first terminal device. By capturing images of the real scene, a simulated shooting game can be effectively implemented based on that scene without additional equipment, which effectively improves the flexibility of the game.

Description

Object processing method and device
Technical Field
The present disclosure relates to the field of augmented reality in computer technology, and in particular, to an object processing method and apparatus.
Background
With the continuous development of mobile communication technology, more and more mobile terminal games, such as shooting games, are currently emerging.
Current two-dimensional shooting games generally cannot meet users' needs, so real-person simulated shooting games have appeared. In the prior art, implementing a real-person simulated shooting game generally requires users to wear dedicated shooting equipment, which may, for example, emit laser beams to simulate shooting operations and include sensors to simulate being hit.
However, implementing a real-person simulated shooting game by wearing shooting equipment generally places high demands on the venue and the equipment, resulting in low game flexibility.
Disclosure of Invention
The disclosure provides an object processing method and device.
According to a first aspect of the present disclosure, there is provided an object processing method applied to a first terminal device, the first terminal device including an image capturing device for capturing an image and a screen for displaying a graphical user interface, the graphical user interface including the image captured by the image capturing device and a first operation control, the method including:
in response to a first touch operation acting on the first operation control, acquiring a first image captured by the image capturing device at the current moment;
if the first image includes at least one object, acquiring an object position of the at least one object in the first image;
if it is determined, according to the object position of the at least one object, that a target object exists among the at least one object, acquiring a first group corresponding to the target object, wherein the target object is located at a position corresponding to a preset area in the graphical user interface;
and executing a preset operation according to the first group corresponding to the target object and a second group corresponding to a control object, wherein the control object is an object associated with the first terminal device.
According to a second aspect of the present disclosure, there is provided an object processing apparatus applied to a first terminal device, the first terminal device including an image capturing device for capturing an image and a screen for displaying a graphical user interface, the graphical user interface including the image captured by the image capturing device and a first operation control, the apparatus including:
a first acquisition module, configured to acquire, in response to a first touch operation acting on the first operation control, a first image captured by the image capturing device at the current moment;
a second acquisition module, configured to acquire an object position of at least one object in the first image if the first image includes the at least one object;
a third acquisition module, configured to acquire a first group corresponding to a target object if it is determined, according to the object position of the at least one object, that the target object exists among the at least one object, wherein the target object is located at a position corresponding to a preset area in the graphical user interface;
and a processing module, configured to execute a preset operation according to the first group corresponding to the target object and a second group corresponding to a control object, wherein the control object is an object associated with the first terminal device.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, the at least one processor executing the computer program to cause the electronic device to perform the method of the first aspect.
Techniques according to the present disclosure improve the flexibility of game operations.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a first flowchart of an object processing method provided by an embodiment of the present disclosure;
FIG. 3 is a second flowchart of an object processing method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an implementation of object segmentation processing provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an implementation in which an object exists at the position corresponding to the preset area, provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an implementation in which no object exists at the position corresponding to the preset area, provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an implementation of determining a partial image of the target object provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an implementation of storing object information in association provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an implementation of determining the object identifier of the target object provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of an implementation of a supplemental control provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of an implementation of game states provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an implementation of game information synchronization provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of an object processing apparatus provided by an embodiment of the present disclosure;
FIG. 14 is a block diagram of an electronic device used to implement the object processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments to facilitate understanding and should be regarded as merely exemplary. Those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
For a better understanding of the technical solutions of the present disclosure, the related art is described in further detail below.
With the continuous development of mobile communication technology, a wide variety of mobile terminal games have been developed, one category of which is the shooting game. In the prior art, when a shooting game is implemented on a terminal device, at least one interactive virtual object is usually displayed in a graphical user interface; the user controls a target virtual object and directs it to perform preset shooting operations on the interactive virtual objects, thereby implementing a two-dimensional shooting game. However, a shooting game implemented purely in two dimensions on the terminal device lacks realism.
Because current two-dimensional shooting games generally cannot meet users' needs, real-person simulated shooting games have appeared. In the prior art, implementing such a game generally requires users to wear dedicated shooting equipment, which may, for example, emit laser beams to simulate shooting operations and include sensors to simulate being hit.
However, implementing a real-person simulated shooting game by wearing shooting equipment generally places high demands on the venue and the equipment, and the game cannot be played anytime and anywhere, resulting in low game flexibility.
Aiming at the problems in the prior art, the present disclosure proposes the following technical idea: the image capturing device of the terminal device captures a real scene, object recognition is performed on the captured scene, and preset operations are performed on the recognized objects. A shooting game based on augmented reality (AR) can thus be realized, so that a real-person simulated shooting game is implemented effectively on the terminal device alone, without relying on additional equipment, which effectively improves the flexibility of game operations.
The application scenario of the present disclosure is first described with reference to fig. 1, and fig. 1 is a schematic diagram of the application scenario provided by an embodiment of the present disclosure.
As shown in fig. 1, the object processing method provided by the embodiment of the present disclosure may be applied to a terminal device 101, where the terminal device 101 may include an image capturing device and a screen 102.
The image capturing device in this embodiment is used to capture images of a real scene, and the screen 102 is used to display a graphical user interface. The graphical user interface is a computer user interface displayed in a graphical manner, which allows the user to manipulate icons or menu controls on the screen with an input device; the input device may be, for example, a mouse or a touch screen, which is not limited in this embodiment. During a game, the user operates through the graphical user interface to realize game interaction.
In a possible implementation manner, the graphical user interface may include the image captured by the image capturing device. As shown in fig. 1, the image capturing device of the terminal device captures the real scene, and the captured image is displayed in the graphical user interface. It can be understood that during the game the image capturing device captures images continuously, ensuring that the graphical user interface on the screen of the terminal device always displays the image of the real scene throughout the game.
The graphical user interface in this embodiment may further include a first operation control. In one possible implementation manner, the game in this embodiment may be, for example, a shooting game, and the corresponding first operation control may be, for example, the shooting control 103 shown in fig. 1. The first operation control responds to a user's operation to trigger a corresponding preset operation, for example, performing a virtual shooting action.
In a possible implementation manner, the first operation control may, for example, be displayed overlaid on the image captured by the image capturing device, as shown in fig. 1; alternatively, in an actual implementation, the image may be displayed on one side of the graphical user interface and the first operation control on the other side.
The first terminal device described in the present disclosure may be, for example, a mobile phone (also called a "cellular" phone), a tablet computer, a computer device, a portable device, a pocket-sized device, a handheld device, a mobile device, or a device with a built-in computer. The specific implementation of the first terminal device may be selected according to actual needs; any device that includes an image capturing device and a screen and can execute the object processing method of the present disclosure may serve as the first terminal device in this embodiment.
On the basis of the above description, the object processing method provided by the embodiment of the present disclosure is described in detail below with reference to fig. 2, which is a first flowchart of the object processing method provided by an embodiment of the present disclosure.
As shown in fig. 2, the method includes:
s201, responding to a first touch operation acted on a first operation control, and acquiring a first image shot by the image pickup device at the current moment.
In this embodiment, the user may perform the first touch operation on the first operation control, where the first operation control is described in the foregoing embodiment, which is not described herein again. In addition, the first touch operation in the embodiment may be, for example, a click operation, a long press operation, a sliding operation, etc., and the specific implementation manner of the first touch operation is not particularly limited in this embodiment, and any operation for triggering a function corresponding to the first operation control may be used as the first touch operation in the embodiment.
In this embodiment, the first operation control is used to trigger a corresponding virtual shooting operation, and because the virtual shooting operation in this embodiment is implemented based on a photographed real scene, the first operation control can be used to perform a first touch operation on the first operation control, so as to obtain a first image photographed by the photographing device at a current moment.
It can be understood that, during the game, the image capturing device continuously captures images and displays the images in the graphical user interface of the terminal device, but when the user performs the first operation control, the virtual shooting operation is required to be performed at the current moment by the user, so that the first image at the current moment can be obtained, wherein the first image is an image of the real scene currently captured by the image capturing device.
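For ease of understanding, a minimal Python sketch of this step is given below. It is illustrative only and not part of the claimed method; names such as `Camera.latest_frame` and `on_first_touch` are assumptions introduced here.

```python
class Camera:
    """Stub standing in for the image capturing device; during the game it
    continuously updates latest_frame with the current preview image."""
    def __init__(self):
        self.latest_frame = None  # e.g. an H x W x 3 pixel array

def on_first_touch(camera: Camera):
    """Handler for the first touch operation on the first operation control:
    snapshot the frame at the current moment while the camera keeps
    streaming to the graphical user interface."""
    first_image = camera.latest_frame
    return first_image
```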
S202, if the first image includes at least one object, acquiring the object position of the at least one object in the first image.
In one possible implementation manner, the first image may include at least one object. For example, the object in this embodiment may be a person; the first image including at least one object then means that, when the image capturing device captured the first image, at least one person was within its capturing range. In other application scenarios, the objects included in the first image may also be other entities, such as buildings or animals.
When the first image includes at least one object, the object position of the at least one object in the first image may be acquired. The object position may include, for example, the positions of a plurality of boundary points of the object in the first image, or the position of the object's center point in the first image; this embodiment does not limit the representation, as long as the object position can indicate the position information of the object in the first image.
In another possible implementation manner, the first image may include no object; for example, when the object is a person, no person was within the capturing range of the image capturing device when the first image was captured. In that case there is no need to determine object positions.
S203, if it is determined, according to the object position of the at least one object, that a target object exists among the at least one object, acquiring a first group corresponding to the target object, where the target object is located at a position corresponding to a preset area in the graphical user interface.
After the object position of each object is determined, whether a target object exists among the at least one object may be determined according to those object positions, where the target object in this embodiment is located at the position corresponding to the preset area in the graphical user interface. In one possible implementation manner, the preset area may be, for example, an area of preset size at the center point of the graphical user interface, which can be understood as the sight used during the shooting operation; the position corresponding to the preset area is then the position of the sight. In other words, when performing the virtual shooting operation, the user needs to aim the shooting sight at an object before the virtual shooting operation can take effect on it.
Given the object position of the at least one object, it may be determined, for example, whether any object is located at the position corresponding to the preset area of the graphical user interface; if so, the object at that position is determined to be the target object, which is the object aimed at by the current virtual shooting operation.
It will be appreciated that users are typically divided into groups when playing a shooting game: users in the same group can be understood as teammates, and users in different groups as opponents. The virtual shooting operation is typically performed against opponents, not against teammates.
Therefore, in this embodiment, after the target object is determined, the first group corresponding to the target object is also acquired. In one possible implementation manner, the group corresponding to each object may be stored in a preset storage unit, and the first group corresponding to the target object may be acquired from that preset storage unit.
S204, executing a preset operation according to the first group corresponding to the target object and a second group corresponding to a control object, where the control object is an object associated with the first terminal device.
In this embodiment, the object associated with the first terminal device is the control object, which can be understood as the user currently operating the first terminal device. The current virtual shooting operation is thus actually the control object performing a virtual shooting operation on the target object through the first terminal device. Based on the above description, the triggered operations differ for opponents and teammates, so the relationship between the target object and the control object is determined according to the first group corresponding to the target object and the second group corresponding to the control object, and the corresponding preset operation is then executed.
In one possible implementation manner, if the first group and the second group are different groups, the target object and the control object are not on the same team and are opponents, and the preset operation may be, for example, performing a virtual shooting operation on the target object. If the first group and the second group are the same group, the target object and the control object are on the same team and are teammates, and the preset operation may be, for example, displaying prompt information in the graphical user interface, the prompt information indicating that the first group and the second group are the same group.
In the actual implementation process, the specific implementation of the preset operation can be extended according to actual requirements, as long as the preset operation is the operation determined according to the first group and the second group.
The object processing method provided by the embodiment of the present disclosure includes: in response to a first touch operation acting on the first operation control, acquiring a first image captured by the image capturing device at the current moment; if the first image includes at least one object, acquiring the object position of the at least one object in the first image; if it is determined, according to the object position of the at least one object, that a target object exists among the at least one object, acquiring a first group corresponding to the target object, where the target object is located at a position corresponding to a preset area in the graphical user interface; and executing a preset operation according to the first group corresponding to the target object and a second group corresponding to the control object, where the control object is the object associated with the first terminal device. An image of the real scene is captured by the image capturing device, the first image at the moment of the first touch operation on the first operation control is acquired, the aimed target object is determined according to the position of each object in the first image, and the preset operation is executed according to the group of the target object and the group of the control object. A real-person simulated shooting game can thus be effectively realized based on the real scene, without additional equipment or venues, which effectively improves the flexibility of the game.
On the basis of the foregoing embodiments, the object processing method provided by the present disclosure is described in further detail below with reference to fig. 3 to fig. 9. FIG. 3 is a second flowchart of the object processing method provided by an embodiment of the present disclosure; FIG. 4 is a schematic diagram of an implementation of object segmentation processing; FIG. 5 is a schematic diagram of an implementation in which an object exists at the position corresponding to the preset area; FIG. 6 is a schematic diagram of an implementation in which no object exists at the position corresponding to the preset area; FIG. 7 is a schematic diagram of an implementation of determining the partial image of the target object; FIG. 8 is a schematic diagram of an implementation of storing object information in association; FIG. 9 is a schematic diagram of an implementation of determining the object identifier of the target object.
As shown in fig. 3, the method includes:
s301, responding to a first touch operation acted on a first operation control, and acquiring a first image shot by the image pickup device at the current moment.
The implementation of S301 is similar to that of S201, and will not be described here again.
S302, if the first image includes at least one object, performing object segmentation processing on the first image and determining the at least one object included in the first image.
In this embodiment, when the first image includes at least one object, the position of each object in the first image needs to be determined. In a possible implementation manner, object segmentation processing may be performed on the first image to determine the at least one object it includes; the segmentation may follow any feasible object segmentation algorithm, which this embodiment does not limit.
This can be understood with reference to fig. 4. Assuming that the currently acquired first image is the image shown as 401 in fig. 4, it can be seen that the first image may include at least one object. When determining whether the first image includes an object, the first image may be processed, for example, by an object recognition algorithm or an object recognition model; the specific implementation is not particularly limited in this embodiment.
When it is determined that the first image includes at least one object, the object segmentation processing may be performed on it. Referring to fig. 4, assume that after the object segmentation processing is performed on the first image 401, the objects 402, 403, 404, 405, 406, and 407 are determined; the segmentation result of each object is symbolically represented by a rectangular box in fig. 4. In the actual implementation process, the segmentation result of each object may also be represented, for example, by a set of pixel points: for any determined object, the segmented object is represented by the set of pixel points it occupies in the first image. The specific representation of the object segmentation result is not particularly limited in this embodiment, as long as the objects in the first image can be segmented and distinguished.
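The disclosure leaves the segmentation algorithm open. As one hedged illustration, the sketch below uses OpenCV's stock HOG pedestrian detector as a stand-in that yields one rectangular box per detected person; any detector or segmentation model with comparable output would do.

```python
import cv2

def segment_objects(first_image):
    """Return bounding boxes (x, y, w, h) of persons found in the first image.

    The HOG pedestrian detector is used here purely as a stand-in for the
    object segmentation processing described above.
    """
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(first_image, winStride=(8, 8))
    return [tuple(int(v) for v in box) for box in boxes]
```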
This embodiment describes the case in which the first image includes at least one object. Alternatively, the first image may include no object, that is, at the moment the user performs the first touch operation on the first operation control, no object is within the capturing range of the image capturing device. In that case the user has triggered a shooting operation but no interactable object is recognized; the virtual shooting operation may, for example, still be triggered, such as displaying a shooting animation, but the shot causes no harm to any object.
Alternatively, preset information may be displayed on the terminal device to prompt that no interactable object currently exists, for example by displaying text information in a preset style, controlling the terminal device to vibrate, or displaying a preset animation in the graphical user interface. The specific prompting manner may be selected according to actual requirements, as long as it prompts that no interactable object currently exists.
S303, determining the position of each object in the first image.
After the at least one object included in the first image is determined, the position of each object in the first image may be determined. In one possible implementation manner, the position of the area of the rectangular box in the object segmentation result of fig. 4 may be determined as the position of the object in the first image. Alternatively, the position of the pixel point set corresponding to each object in the segmentation result may be determined as the position of the object in the first image; the specific implementation of the position of each object in the first image is not limited in this embodiment.
S304, determining the position of each object in the graphical user interface according to the object position of at least one object in the first image, wherein the size of the first image is the same as the size of the graphical user interface.
After the position of each object in the first image is determined, whether a target object exists among the at least one object may be determined according to those object positions, where the target object in this embodiment is located at the position corresponding to the preset area in the graphical user interface. In a possible implementation, the position of each object in the graphical user interface is determined according to its object position in the first image, and the target object is then determined.
It will be appreciated that the first image in this embodiment is exactly the image displayed in the graphical user interface after being captured by the image capturing device, so the size of the first image and the size of the graphical user interface are the same. In one possible implementation manner, the object position of the first object in the first image may therefore be determined directly as the position of the first object in the graphical user interface.
For example, if the position of the first object in the first image is represented by a rectangular box whose four vertices have coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) in the first image, then the position of the first object in the graphical user interface can also be represented by that rectangular box, with the same coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4) in the graphical user interface. The position is represented here by coordinates; in the actual implementation process the specific representation may be selected according to actual requirements, for example as distances between vertices and boundaries, which this embodiment does not limit.
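A minimal sketch of this coordinate mapping follows; it assumes boxes in (x, y, w, h) form. In the disclosure the two sizes are equal, so the mapping reduces to the identity, and the scale factors only matter if the sizes ever differ.

```python
def image_to_gui(box, image_size, gui_size):
    """Map a bounding box from first-image coordinates to GUI coordinates.

    image_size and gui_size are (width, height); with equal sizes, as in
    this embodiment, the function returns the box unchanged.
    """
    x, y, w, h = box
    sx = gui_size[0] / image_size[0]  # horizontal scale factor
    sy = gui_size[1] / image_size[1]  # vertical scale factor
    return (x * sx, y * sy, w * sx, h * sy)
```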
S305, if there is an object among the at least one object whose position in the graphical user interface is located at the position corresponding to the preset area, determining the object at that position as the target object.
After the position of each object in the graphical user interface is determined, the target object can be determined according to the position corresponding to the preset area in the graphical user interface and the position of each object in the graphical user interface.
In one possible implementation manner, it may be determined whether any of the at least one object has a position in the graphical user interface located at the position corresponding to the preset area; if so, that object is determined as the target object.
This can be understood with reference to fig. 5. As shown in fig. 5, the first image displayed in the graphical user interface includes a plurality of objects, for example the objects 502, 503, 504, 505, 506, and 507, each of which has a corresponding position in the graphical user interface. The position corresponding to the preset area in the graphical user interface may be, for example, the position of the circular area indicated by 501 in fig. 5, which can be understood as the shooting sight. It is worth noting that in the example of fig. 5 the position corresponding to the preset area is the position of a circular area at the center of the graphical user interface.
As can be determined from fig. 5, the position of the object 505 in the graphical user interface is located at the position 501 corresponding to the preset area, so the object 505 may be determined as the target object.
In one possible implementation manner, only one object is included at the position corresponding to the preset area, as in the case of fig. 5, and that object is determined as the target object. Alternatively, a plurality of objects may be included at the position corresponding to the preset area. In that case, for example, all of those objects may be determined as target objects; or one target object may be determined from among them, for example the object whose position coincides with the center point of the position corresponding to the preset area, the object closest to that center point, or the object with the largest area within the position corresponding to the preset area. This embodiment is not particularly limited in this respect, and the choice may be made according to actual requirements, as long as the determined target object is located, in the graphical user interface, at the position corresponding to the preset area.
It may be understood that "the position of the object in the graphical user interface is located at the position corresponding to the preset area" in this embodiment may mean that the area corresponding to the object's position intersects the position corresponding to the preset area, or that the object's area is entirely contained within the position corresponding to the preset area. This may depend, for example, on the game design; this embodiment does not limit it, and the choice may be made according to actual requirements.
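The following sketch makes this test concrete under the assumption of a circular sight centered in the graphical user interface and rectangular object boxes; the intersection criterion and the nearest-to-center tie-break are just two of the options the embodiment allows.

```python
def box_hits_sight(box, gui_size, sight_radius):
    """True if a GUI-space box (x, y, w, h) intersects the circular preset
    area (the sight) centered in the graphical user interface."""
    cx, cy = gui_size[0] / 2.0, gui_size[1] / 2.0
    x, y, w, h = box
    # Closest point of the rectangle to the circle center.
    nearest_x = min(max(cx, x), x + w)
    nearest_y = min(max(cy, y), y + h)
    return (nearest_x - cx) ** 2 + (nearest_y - cy) ** 2 <= sight_radius ** 2

def pick_target(boxes, gui_size, sight_radius):
    """Among boxes overlapping the sight, pick the one nearest its center;
    return None when no object is aimed at (the fig. 6 case)."""
    cx, cy = gui_size[0] / 2.0, gui_size[1] / 2.0
    hits = [b for b in boxes if box_hits_sight(b, gui_size, sight_radius)]
    if not hits:
        return None
    return min(hits, key=lambda b: (b[0] + b[2] / 2 - cx) ** 2
                                   + (b[1] + b[3] / 2 - cy) ** 2)
```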
It should be noted that the above describes the case in which some object's position in the graphical user interface is located at the position corresponding to the preset area. In another possible implementation manner, no object may exist at the position corresponding to the preset area; this can be understood with reference to fig. 6, where no object exists at the position 601 corresponding to the preset area of the graphical user interface.
In that case the user has triggered a shooting operation, but no object is present at the shooting sight, that is, no object is currently aimed at. The virtual shooting operation may, for example, still be triggered, such as displaying a shooting animation, but the shot harms no object.
Alternatively, preset information may be displayed on the terminal device to prompt the user that no object is currently aimed at, for example by displaying text information in a preset style, controlling the terminal device to vibrate, or displaying a preset animation in the graphical user interface. The specific prompting manner may be selected according to actual requirements, as long as it prompts the user that no object is currently aimed at.
S306, acquiring the partial image corresponding to the target object from the first image.
After the target object is determined, the first group corresponding to it needs to be determined in this embodiment, which in turn determines how the preset operation is executed next.
In one possible implementation manner, the target object may be identified. For convenience of processing and to improve processing efficiency, a partial image corresponding to the target object may first be acquired from the first image. This can be understood with reference to fig. 7: in the first image 701, assuming 702 is the currently determined target object, the partial image corresponding to the target object 702 is acquired from the first image 701, yielding, for example, the image shown as 703 in fig. 7.
In one possible implementation manner, the partial image corresponding to the target object may be obtained by cropping the first image. The specific size, shape, and so on of the partial image are not limited in this embodiment, as long as the partial image is obtained from the first image and includes the target object.
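A minimal cropping sketch follows, assuming the first image is an H x W x C array (for example a numpy array) and the target's box is in (x, y, w, h) form; any margin added around the box is a free design choice.

```python
def crop_partial_image(first_image, box):
    """Crop the partial image containing the target object from the first
    image; first_image is assumed to be array-like with [row, col] indexing."""
    x, y, w, h = (int(v) for v in box)
    return first_image[y:y + h, x:x + w]
```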
S307, matching the partial image corresponding to the target object with the image information of the at least one object, and determining the object identifier of the target object.
After the partial image of the target object is determined, the group currently corresponding to the target object may be determined by identifying that partial image. In a possible implementation, before the first group corresponding to the target object is acquired, the following operations may be performed:
receiving image information of at least one object, an object identifier of each of the at least one object, and a group corresponding to each of the at least one object; and, for any one object, storing the image information of the object and the group corresponding to the object in association with the object identifier of the object.
In one possible implementation manner, before the game starts, the mobile device may be used to record image information of each user participating in the game, whereby the image information of the at least one object is received, and each user may input a corresponding object identifier and group. The object identifier may be, for example, the user's name or nickname; its specific implementation is not limited, as long as it distinguishes different objects. In this embodiment, the object identifier and group of each object may thus be received, and the image information, group, and object identifier of each object are then stored in association with one another.
This can be understood with reference to fig. 8. Assume the object identifiers of the objects currently participating in the game include Zhang San, Li Si, Wang Er, and Li Yi, where Zhang San corresponds to group 1 and his image information is as shown in fig. 8. Likewise, Li Si corresponds to group 1, Wang Er corresponds to group 2, and Li Yi corresponds to group 2, each with the respective image information shown in fig. 8. As shown in fig. 8, the image information of each object, the group of each object, and the respective object identifiers are stored in association with one another.
The description of fig. 8 is merely exemplary; in the actual implementation process, the specific implementation of the object identifiers and of the image information and groups associated with each object identifier may be selected according to actual requirements, which this embodiment does not limit.
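As a minimal sketch of this associated storage (the "preset storage unit"), the structure below keys each record by object identifier; the field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ObjectRecord:
    """One participant's associated data, mirroring fig. 8."""
    object_id: str   # e.g. the user's name or nickname
    group: int       # e.g. group 1 or group 2
    image_info: Any  # reference image(s) recorded before the game

registry: Dict[str, ObjectRecord] = {}

def register(object_id: str, group: int, image_info: Any) -> None:
    """Store image information and group in association with the identifier."""
    registry[object_id] = ObjectRecord(object_id, group, image_info)
```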
Based on the above description, when determining the group corresponding to the target object, the partial image corresponding to the target object may first be matched against the image information of the at least one object to determine the object identifier of the target object.
In one possible implementation manner, the partial image of the target object may be matched against the image information of each object to determine the matching degree between the partial image and each piece of image information; the object identifier associated with the image information with the highest matching degree is then determined as the object identifier of the target object.
This can be understood with reference to fig. 9. As shown in fig. 9, assuming the image information of the at least one object includes the image information 902, 903, 904, and 905, the partial image 901 of the target object may be matched against each of them to determine the image information with the highest matching degree.
Assuming the matching degree between the image information 902 and the partial image 901 of the target object is the highest, the object identifier "Zhang San" associated with the image information 902 may be determined as the object identifier of the target object; that is, the identity of the target object is determined to be "Zhang San".
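The disclosure does not fix a matching technique. The sketch below assumes a hypothetical feature extractor embed() (for example a person re-identification network) and scores matches by cosine similarity over the registry from the previous sketch; both names are illustrative.

```python
import math
from typing import Callable, Dict, List, Optional

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_target(partial_image, registry: Dict[str, "ObjectRecord"],
                    embed: Callable) -> Optional[str]:
    """Return the object identifier whose stored image information matches
    the target object's partial image with the highest degree."""
    query = embed(partial_image)
    best_id, best_score = None, float("-inf")
    for record in registry.values():
        score = cosine(query, embed(record.image_info))
        if score > best_score:
            best_id, best_score = record.object_id, score
    return best_id
```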
S308, determining the group associated with the object identifier of the target object as the first group corresponding to the target object.
After the object identifier of the target object is determined, the identity of the target object is known. As described above, the object identifier of each object in this embodiment is stored in association with that object's group, so the group associated with the object identifier of the target object may be determined as the first group corresponding to the target object.
In one possible implementation manner, continuing the above example in which the object identifier of the target object is determined to be "Zhang San", it can be determined from fig. 8 that the group associated with the object identifier "Zhang San" is group 1, so the first group corresponding to the target object is group 1.
In the actual implementation process, the manner of dividing groups and the number of groups may be selected according to actual requirements; there may, for example, also be a group 3, a group 4, and so on, which this embodiment does not limit.
S309, determining whether the first group and the second group are different groups; if yes, executing S310, otherwise executing S311.
After the first group of the target object is determined, it may be determined whether the first group and the second group corresponding to the control object are different groups.
In one possible implementation manner, assuming the target object is Zhang San and the object currently holding the first terminal device is Song Liu, this amounts to determining whether Zhang San and Song Liu are currently in different groups, that is, whether they are teammates or opponents.
S310, updating the attribute value corresponding to the target object.
In one possible implementation manner, if the first group and the second group are different groups, the target object and the control object are in an opponent relationship. Through the series of operations above it has been determined that the current shooting sight is aimed at the target object, so a virtual shooting operation may be performed on it; for example, a preset shooting animation may be displayed in the graphical user interface, so that the user can confirm that the virtual shooting operation is currently being performed on the target object.
In addition, since a shooting operation in a shooting game causes virtual shooting damage to the hit object, the attribute value corresponding to the target object needs to be updated. The attribute value in this embodiment may be, for example, a game life value.
In one possible implementation manner, the attribute value corresponding to the target object at the current moment may be acquired together with the preset value corresponding to the first operation control, and the preset value is then subtracted from the attribute value to obtain the updated attribute value corresponding to the target object.
For example, if the current target object is Zhang San, whose current virtual life value is 2000, and the preset value corresponding to the first operation control is 688, meaning that one virtual shooting operation causes 688 points of virtual damage, then the preset value 688 is subtracted from the attribute value 2000, yielding the updated attribute value 1312 for the target object.
In the actual implementation process, the specific implementation of the attribute value and of the preset value corresponding to the first operation control can be selected according to actual requirements, which this embodiment does not limit.
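A minimal sketch of this update, using the worked numbers from the description (names such as apply_hit are assumptions):

```python
def apply_hit(attribute_values: dict, target_id: str, preset_value: int) -> int:
    """Subtract the first operation control's preset value (per-shot damage)
    from the target object's attribute value (game life value)."""
    attribute_values[target_id] -= preset_value
    return attribute_values[target_id]

# Worked example from the description: 2000 - 688 = 1312.
life_values = {"Zhang San": 2000}
assert apply_hit(life_values, "Zhang San", 688) == 1312
```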
S311, displaying prompt information in the graphical user interface, where the prompt information indicates that the first group and the second group are the same group.
In another possible implementation manner, if the first group and the second group are the same group, the target object and the control object are in a teammate relationship, and a teammate cannot be harmed. In that case prompt information may be displayed in the graphical user interface to inform the user that the first group corresponding to the current target object and the second group corresponding to the control object are the same group. The prompt information may be, for example, text information in a preset style, or a preset color or preset-style text displayed around the target object; this embodiment does not limit it, as long as the prompt information indicates that the first group and the second group are the same group.
In one possible implementation, when the first group and the second group are the same group, the preset shooting animation may still be displayed in the graphical user interface, but without causing virtual shooting damage to the target object.
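Putting S309 to S311 together, the sketch below dispatches on the two groups; the UI callbacks show_hint and play_shot_animation are assumed names, and apply_hit and the registry come from the earlier sketches.

```python
def perform_preset_operation(target_id: str, control_id: str,
                             registry: dict, attribute_values: dict,
                             preset_value: int,
                             show_hint, play_shot_animation) -> None:
    """Execute the preset operation according to the first group (target
    object) and the second group (control object)."""
    play_shot_animation()  # the shooting animation is shown in both branches
    first_group = registry[target_id].group
    second_group = registry[control_id].group
    if first_group != second_group:
        # Opponents: apply virtual shooting damage (S310).
        apply_hit(attribute_values, target_id, preset_value)
    else:
        # Teammates: no damage; prompt that both are in the same group (S311).
        show_hint(f"{target_id} is in your group")
```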
According to the object processing method provided by the embodiment of the present disclosure, object segmentation processing on the first image simply and effectively yields each object in the first image and its position. The position of each object in the graphical user interface is then compared with the position corresponding to the preset area to determine the target object, ensuring that the determined target object is the object aimed at by the current shooting operation and that the subsequent virtual shooting operation is valid. After the target object is determined, the partial image including it is compared with the image information received in advance, so the first group of the target object can be determined quickly and effectively, and the corresponding preset operation is executed according to the first group of the target object and the second group of the control object. An AR shooting game can thus be realized on the terminal device, effectively improving the operational flexibility of the game.
On the basis of the foregoing embodiments, in the object processing method provided by the present disclosure, a supplemental control may be displayed in the graphical user interface in addition to the first operation control. It can be understood that a shooting game requires an operation for changing bullets, and the supplemental control in this embodiment may be, for example, the control that triggers that bullet-changing operation.
This can be understood with reference to fig. 10, which is a schematic diagram of an implementation of the supplemental control provided by an embodiment of the present disclosure.
It will be understood that the foregoing schematic diagrams showed portrait-mode display and operation; in the actual implementation process, landscape mode may also be used, as shown in fig. 10. In fig. 10, the first image is likewise displayed in the graphical user interface and includes the objects 1004 and 1005, where 1003 is the position corresponding to the preset area of the graphical user interface, that is, the shooting sight, and 1001 is the first operation control, that is, the shooting control. These implementations are similar to those described above and are not repeated here.
The graphical user interface shown in fig. 10 further includes a supplemental control 1002, that is, the bullet-changing control shown in fig. 10. In one possible implementation manner, in response to a second touch operation acting on the supplemental control in the graphical user interface, the number of controllable targets corresponding to the first terminal device may be updated to a preset number.
In this embodiment, a controllable target may be understood as, for example, a bullet for virtual shooting. Referring to fig. 10, assuming the number of remaining bullets at the current moment is 8, after the second touch operation acting on the supplemental control 1002 is responded to, the number of remaining bullets may be updated to the preset number. In the actual implementation process, the specific value of the preset number is not particularly limited and may be selected and set according to actual requirements.
It will be appreciated that the supplemental control is introduced here for landscape mode; it may equally be provided for portrait-mode display and operation, in a manner similar to that just introduced, which is not repeated here.
By providing the supplementary control and updating the number of controllable targets corresponding to the first terminal device in response to an operation acting on it, the operability and completeness of the shooting game can be ensured.
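For illustration, a minimal sketch of the supplementary-control handler is given below; the magazine size of 8 used as the preset number, and all class and method names, are assumptions of this sketch:

```python
# A sketch of the supplementary ("bullet changing") control handler; the
# magazine size of 8 used as the preset number is an assumption here.
MAGAZINE_SIZE = 8

class AmmoState:
    """Tracks the controllable targets (bullets) of the first terminal device."""

    def __init__(self, remaining: int = MAGAZINE_SIZE):
        self.remaining = remaining

    def on_shoot(self) -> bool:
        # First touch operation: consume one bullet if any remain.
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True

    def on_supplementary_control(self) -> None:
        # Second touch operation: update the count to the preset number.
        self.remaining = MAGAZINE_SIZE

ammo = AmmoState(remaining=2)
ammo.on_shoot()                  # 1 bullet left
ammo.on_supplementary_control()  # reset to the preset number
print(ammo.remaining)            # 8
```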
After updating the attribute value corresponding to the target object, the object processing method provided in the embodiments of the present disclosure may further determine whether the updated attribute value corresponding to the target object is less than or equal to a first value. The first value in this embodiment is a number used to measure whether the object is out of the game; for example, the first value may be 0. In the actual implementation process, the specific choice of the first value may be made according to actual requirements.
In one possible implementation manner, if it is determined that the updated attribute value corresponding to the target object is less than or equal to the first value, the game state corresponding to the target object may be marked as an end state.
For example, when the attribute value of the target object "Zhang San" is 0 or below 0, it may be determined that the target object "Zhang San" has no remaining life value in the current game and is therefore out of the current game; the game state corresponding to the target object "Zhang San" may then be marked as an end state to indicate that "Zhang San" has been eliminated.
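A minimal sketch of this check, taking the first value as 0 as in the example above (all identifiers are assumptions of this illustration), could be:

```python
# A sketch of the end-state check, with the first value taken as 0.
FIRST_VALUE = 0

def apply_damage(attributes, states, object_id, preset_value):
    attributes[object_id] -= preset_value
    if attributes[object_id] <= FIRST_VALUE:
        states[object_id] = "ended"  # the object is out of the current game

attributes = {"zhang_san": 5}
states = {"zhang_san": "playing"}
apply_damage(attributes, states, "zhang_san", 10)
print(attributes["zhang_san"], states["zhang_san"])  # -5 ended
```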
In addition, on the basis of the above embodiments, the object processing method provided in the present disclosure may further determine whether the game has ended after marking the game state corresponding to the target object as the end state. It can be understood that when all the objects remaining in the game belong to the same team, the end of the game may be determined.
In one possible implementation manner, the remaining game objects whose game states are not the end state may be obtained according to the respective game states of the objects, and the groups corresponding to the remaining game objects are then determined. If the remaining game objects all belong to the same group, end information is displayed on the screen of the first terminal device, where the end information is used to indicate that the current game has ended.
For example, this can be understood with reference to fig. 11, which is a schematic diagram illustrating an implementation of game states provided by an embodiment of the present disclosure.
For example, the objects in the current game include Zhang San, Liu Si, Wang Er, Liu Yi, and Song Liu. Assume that Zhang San and Liu Si form group 1, and Wang Er, Liu Yi, and Song Liu form group 2, and assume that the current game states of the respective objects are: Zhang San (end state), Liu Si (end state), Wang Er (end state), Liu Yi (not end state), Song Liu (not end state). The remaining game objects can then be determined to be Liu Yi and Song Liu; since Liu Yi and Song Liu belong to the same group, it can be determined that the current game has ended, and end information can be displayed on the screen of the first terminal device.
In the actual implementation process, the specific form of the end information may be selected according to actual requirements, which is not limited in this embodiment, as long as the end information can indicate that the current game has ended.
In another possible implementation manner, if the remaining game objects do not all belong to the same group, it may be determined that the current game has not yet ended, and the operations described in the above embodiments may then continue to be performed.
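For illustration, the game-over test described above (the game ends when every remaining object belongs to one group) might be sketched as follows; the names reuse the example objects and are assumptions of this sketch:

```python
# A sketch of the game-over test: the game ends when all remaining
# (not-ended) objects belong to one group.
def game_over(states, groups):
    remaining = [obj for obj, state in states.items() if state != "ended"]
    return len({groups[obj] for obj in remaining}) <= 1

states = {"zhang_san": "ended", "liu_si": "ended", "wang_er": "ended",
          "liu_yi": "playing", "song_liu": "playing"}
groups = {"zhang_san": "group1", "liu_si": "group1", "wang_er": "group2",
          "liu_yi": "group2", "song_liu": "group2"}
print(game_over(states, groups))  # True -> display the end information
```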
Based on the description of the foregoing embodiments, it may be determined that, in the actual implementation process, the execution subject of each embodiment is the first terminal device, and the terminal devices held by the objects in the same game should keep their data and information synchronized. For example, after the current first terminal device updates the attribute value of the target object, the updated information should be synchronized to the other terminal devices to ensure the synchronization of game data.
In one possible implementation, the first terminal device may send the game information determined by the first terminal device to the server, so that the server synchronizes the game information to at least one second terminal device, where a second terminal device is a terminal device corresponding to a game object other than the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object, and the end information.
For example, this can be understood with reference to fig. 12, which is a schematic diagram illustrating an implementation of game information synchronization provided by an embodiment of the present disclosure.
As shown in fig. 12, the first terminal device 1201 may send the game information to the server 1202, and the server 1202 then synchronizes the game information to each second terminal device, such as the second terminal device 1203 and the second terminal device 1204 shown in fig. 12. In the actual implementation process, the specific number of second terminal devices may be selected according to actual requirements, and any terminal device in the same game as the first terminal device may serve as a second terminal device in this embodiment.
For example, the objects in the current game include Zhang San, Liu Si, Wang Er, Liu Yi, and Song Liu. Assuming that the terminal device held by Song Liu is the first terminal device, the terminal devices held by Zhang San, Liu Si, Wang Er, and Liu Yi may all serve as second terminal devices. It is understood that, in the actual implementation process, any terminal device participating in the game may serve as the first terminal device in this embodiment, with the remaining terminal devices then being the second terminal devices.
In the actual implementation process, the specific content of the game information may be selected and extended according to actual requirements; it can be understood that any game-related information that needs to be synchronized may serve as the game information in this embodiment.
By synchronizing the game information between the terminal devices, correct running of the game can be ensured, thereby ensuring the effectiveness and correctness of the game.
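For illustration only, a minimal in-process sketch of this synchronization is given below; the Server and Terminal classes and the JSON payload are assumptions standing in for the real server and network transport:

```python
# An in-process sketch of game information synchronization.
import json

class Terminal:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def receive(self, game_info: str):
        self.state.update(json.loads(game_info))

class Server:
    def __init__(self):
        self.second_terminals = []

    def register(self, terminal: Terminal):
        self.second_terminals.append(terminal)

    def synchronize(self, game_info: str):
        # Push the first terminal's game information to every second terminal.
        for terminal in self.second_terminals:
            terminal.receive(game_info)

server = Server()
for name in ("zhang_san", "liu_si", "wang_er", "liu_yi"):
    server.register(Terminal(name))

# The first terminal device (held by song_liu here) reports an update.
game_info = json.dumps({"target": "zhang_san", "attribute_value": -5,
                        "game_state": "ended"})
server.synchronize(game_info)
print(server.second_terminals[0].state)  # every terminal now sees the update
```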
In summary, according to the object processing method of the embodiments of the present disclosure, a real-person simulated shooting game based on AR can be implemented simply and quickly using terminal devices, without requiring a dedicated venue or equipment, so that the flexibility of operation is effectively improved, the cost of the game can be effectively reduced, and the range of target users is widened.
Fig. 13 is a schematic structural diagram of an object processing apparatus according to an embodiment of the present disclosure. As shown in fig. 13, the object processing apparatus 1300 of this embodiment may include: a first obtaining module 1301, a second obtaining module 1302, a third obtaining module 1303, and a processing module 1304.
A first obtaining module 1301, configured to obtain a first image captured by the image capturing device at a current moment in response to a first touch operation acting on the first operation control;
a second obtaining module 1302, configured to obtain, if the first image includes at least one object, an object position of the at least one object in the first image;
a third obtaining module 1303, configured to obtain a first group corresponding to a target object if it is determined that the target object exists in the at least one object according to an object position of the at least one object, where the target object is located in a position corresponding to a preset area in the graphical user interface;
and a processing module 1304, configured to execute a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, where the control object is an object associated with the first terminal device.
In a possible implementation manner, the processing module 1304 is specifically configured to:
if the first grouping and the second grouping are different groupings, updating the attribute value corresponding to the target object;
and if the first grouping and the second grouping are the same, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first grouping and the second grouping are the same.
In a possible implementation manner, the processing module 1304 is specifically configured to:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
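For illustration, the listed update reduces to a single subtraction (the function name is an assumption of this sketch):

```python
# A sketch of the listed update: subtract the first operation control's
# preset value from the target object's current attribute value.
def updated_attribute(attribute_value: int, preset_value: int) -> int:
    return attribute_value - preset_value

print(updated_attribute(100, 30))  # 70
```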
In a possible implementation manner, the second obtaining module 1302 is specifically configured to:
performing object segmentation processing on the first image, and determining at least one object included in the first image;
and determining the position of each object in the first image.
In a possible implementation manner, the third obtaining module 1303 is specifically configured to:
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image, wherein the size of the first image is the same as the size of the graphical user interface;
and if, among the at least one object, there is an object whose position in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as the target object.
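Where several objects fall within the preset area, one possible tie-breaking rule (choosing the object whose center is closest to the sight's center, as also recited in the claims below) might be sketched like this; all names are assumptions of this illustration:

```python
# A sketch of target selection when several objects overlap the preset area.
import math
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def center(box: Box) -> Tuple[float, float]:
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def overlaps(a: Box, b: Box) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def pick_target(boxes: List[Box], sight: Box) -> Optional[Box]:
    hits = [box for box in boxes if overlaps(box, sight)]
    if not hits:
        return None
    # Tie-break: the object whose center is closest to the sight's center.
    return min(hits, key=lambda box: math.dist(center(box), center(sight)))

print(pick_target([(0, 0, 50, 50), (40, 40, 50, 50)], (45, 45, 10, 10)))
# (40, 40, 50, 50) wins: its center is nearer the sight's center
```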
In a possible implementation manner, the processing module 1304 is further configured to:
before a first group corresponding to the target object is acquired, receiving image information of at least one object, an object identifier of each of the at least one object and a group corresponding to each of the at least one object;
and for any one of the objects, storing the image information of the object and the group corresponding to the object in association with the object identification of the object.
In a possible implementation manner, the third obtaining module 1303 is specifically configured to:
acquiring a partial image corresponding to the target object from the first image;
matching the partial image corresponding to the target object with the image information of the at least one object to determine an object identification of the target object;
and determining the group associated with the object identification of the target object as a first group corresponding to the target object.
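For illustration, the matching step might be sketched with a plain mean-absolute-difference score standing in for whatever image-matching technique the real system employs (an assumption of this sketch, as is the use of NumPy):

```python
# A sketch of identifying the target from its partial image: score the
# partial image against each stored reference and take the best match.
import numpy as np

def identify(partial: np.ndarray, stored: dict) -> str:
    """stored maps object_id -> reference image of the same shape."""
    scores = {object_id: float(np.mean(np.abs(partial - reference)))
              for object_id, reference in stored.items()}
    return min(scores, key=scores.get)  # smallest difference = best match

stored = {"zhang_san": np.zeros((8, 8)), "liu_yi": np.ones((8, 8))}
print(identify(np.full((8, 8), 0.9), stored))  # liu_yi
```

The group stored in association with the matched object identifier then serves as the first group.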
In a possible implementation manner, the processing module 1304 is further configured to:
in response to a second touch operation acting on a supplementary control in the graphical user interface, updating the number of controllable targets corresponding to the first terminal device to a preset number.
In a possible implementation manner, the processing module 1304 is further configured to:
and after updating the attribute value corresponding to the target object, marking the game state corresponding to the target object as an ending state if the updated attribute value corresponding to the target object is less than or equal to a first numerical value.
In a possible implementation manner, the processing module 1304 is further configured to:
after marking the game state corresponding to the target object as an end state, acquiring the remaining game objects of which the game states are not the end state according to the respective game states of the objects;
determining respective corresponding groups of the remaining game objects;
and if all the remaining game objects are in the same group, displaying end information on a screen of the first terminal equipment, wherein the end information is used for indicating the end of the current game.
In a possible implementation manner, the processing module 1304 is further configured to:
transmitting the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
wherein the second terminal device is a terminal device corresponding to a game object other than the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object, and the end information.
The disclosure provides an object processing method and device, which are applied to the field of augmented reality in computer technology to achieve the purpose of improving the flexibility of game operation.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 14 shows a schematic block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the electronic device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the respective methods and processes described above, for example, an object processing method. For example, in some embodiments, the object processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When a computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the object processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the object handling method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present application may be performed in parallel or sequentially or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (22)

1. An object processing method is applied to a first terminal device comprising an image pickup device and a screen, wherein the image pickup device is used for shooting images, the screen is used for displaying a graphical user interface, the graphical user interface comprises the images shot by the image pickup device and a first operation control, and the method comprises the following steps:
responding to a first touch operation acting on the first operation control, and acquiring a first image shot by the camera device at the current moment;
If the first image comprises at least one object, performing object segmentation processing on the first image, and determining at least one object included in the first image;
determining the position of each object in the first image;
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image; if, among the at least one object, there is an object whose position in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as a target object, wherein, when a plurality of objects are included at the position corresponding to the preset area, the target object is determined as an object whose position coincides with the center point of the position corresponding to the preset area, an object whose position is closest to the center point of the position corresponding to the preset area, or an object having the largest area located at the position corresponding to the preset area;
acquiring a first group corresponding to the target object, wherein the target object is positioned at a position corresponding to a preset area in the graphical user interface;
and executing preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, wherein the control object is an object associated with the first terminal equipment.
2. The method of claim 1, wherein the performing a preset operation according to the first group corresponding to the target object and the second group corresponding to the control object includes:
if the first grouping and the second grouping are different groupings, updating the attribute value corresponding to the target object;
and if the first grouping and the second grouping are the same, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first grouping and the second grouping are the same.
3. The method of claim 2, wherein the updating the attribute value corresponding to the target object comprises:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
4. The method of claim 1, wherein the size of the first image and the size of the graphical user interface are the same.
5. The method according to any one of claims 2-4, further comprising, prior to obtaining the first packet corresponding to the target object:
Receiving image information of at least one object, an object identifier of each of the at least one object, and a packet corresponding to each of the at least one object;
and for any one of the objects, storing the image information of the object and the group corresponding to the object in association with the object identification of the object.
6. The method of claim 5, wherein the obtaining the first packet corresponding to the target object comprises:
acquiring a partial image corresponding to the target object from the first image;
matching the partial image corresponding to the target object with the image information of the at least one object to determine an object identification of the target object;
and determining the group associated with the object identification of the target object as a first group corresponding to the target object.
7. The method of claim 1, the method further comprising:
and responding to a second touch operation of a supplementary control acting on the graphical user interface, and updating the number of controllable targets corresponding to the first terminal equipment to a preset number.
8. The method of claim 2, after the updating the attribute value corresponding to the target object, the method further comprising:
And if the updated attribute value corresponding to the target object is smaller than or equal to the first numerical value, marking the game state corresponding to the target object as an ending state.
9. The method of claim 8, after marking the game state corresponding to the target object as an end state, the method further comprising:
according to the respective game states of the objects, acquiring the remaining game objects of which the game states are not the ending states;
determining respective corresponding groups of the remaining game objects;
and if all the remaining game objects are in the same group, displaying end information on a screen of the first terminal equipment, wherein the end information is used for indicating the end of the current game.
10. The method of claim 9, the method further comprising:
transmitting the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
wherein the second terminal device is a terminal device corresponding to a game object other than the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object, and the end information.
11. An object processing apparatus applied to a first terminal device including an image pickup apparatus for capturing an image and a screen for displaying a graphical user interface including the image captured by the image pickup apparatus and a first operation control, comprising:
the first acquisition module is used for responding to a first touch operation acted on the first operation control and acquiring a first image shot by the camera device at the current moment;
the second acquisition module is used for acquiring the object position of at least one object in the first image if the first image comprises the at least one object;
a third obtaining module, configured to obtain a first group corresponding to a target object if it is determined that the target object exists in the at least one object according to an object position of the at least one object, where the target object is located at a position corresponding to a preset area in the graphical user interface;
the processing module is used for executing preset operation according to the first group corresponding to the target object and the second group corresponding to the control object, wherein the control object is an object associated with the first terminal equipment;
The second obtaining module is specifically configured to:
performing object segmentation processing on the first image, and determining at least one object included in the first image;
determining the position of each object in the first image;
the third obtaining module is specifically configured to:
determining the position of each object in the graphical user interface according to the object position of the at least one object in the first image;
if, among the at least one object, there is an object whose position in the graphical user interface is located at the position corresponding to the preset area, determining the object at the position corresponding to the preset area as the target object, wherein, when a plurality of objects are included at the position corresponding to the preset area, the target object is an object whose position coincides with the center point of the position corresponding to the preset area, an object whose position is closest to the center point of the position corresponding to the preset area, or an object having the largest area located at the position corresponding to the preset area.
12. The apparatus of claim 11, wherein the processing module is specifically configured to:
if the first grouping and the second grouping are different groupings, updating the attribute value corresponding to the target object;
And if the first grouping and the second grouping are the same, displaying prompt information in the graphical user interface, wherein the prompt information is used for indicating that the first grouping and the second grouping are the same.
13. The apparatus of claim 12, wherein the processing module is specifically configured to:
acquiring an attribute value corresponding to the target object;
acquiring a preset value corresponding to the first operation control;
subtracting the preset value from the attribute value corresponding to the target object to obtain an updated attribute value corresponding to the target object.
14. The apparatus of claim 11, wherein a size of the first image and a size of the graphical user interface are the same.
15. The apparatus of any of claims 12-14, the processing module further to:
before a first group corresponding to the target object is acquired, receiving image information of at least one object, an object identifier of each of the at least one object and a group corresponding to each of the at least one object;
and for any one of the objects, storing the image information of the object and the group corresponding to the object in association with the object identification of the object.
16. The apparatus of claim 15, wherein the third acquisition module is specifically configured to:
acquiring a partial image corresponding to the target object from the first image;
matching the partial image corresponding to the target object with the image information of the at least one object to determine an object identification of the target object;
and determining the group associated with the object identification of the target object as a first group corresponding to the target object.
17. The apparatus of claim 11, the processing module further to:
and responding to a second touch operation of a supplementary control acting on the graphical user interface, and updating the number of controllable targets corresponding to the first terminal equipment to a preset number.
18. The apparatus of claim 12, the processing module further to:
and after updating the attribute value corresponding to the target object, marking the game state corresponding to the target object as an ending state if the updated attribute value corresponding to the target object is less than or equal to a first numerical value.
19. The apparatus of claim 18, the processing module further to:
after marking the game state corresponding to the target object as an end state, acquiring the remaining game objects of which the game states are not the end state according to the respective game states of the objects;
Determining respective corresponding groups of the remaining game objects;
and if all the remaining game objects are in the same group, displaying end information on a screen of the first terminal equipment, wherein the end information is used for indicating the end of the current game.
20. The apparatus of claim 19, the processing module further to:
transmitting the game information determined by the first terminal device to a server so that the server synchronizes the game information to at least one second terminal device,
wherein the second terminal device is a terminal device corresponding to a game object other than the control object, and the game information includes at least one of the following: the updated attribute value corresponding to the target object, the game state of each game object, and the end information.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-10.
CN202110898692.6A 2021-08-05 2021-08-05 Object processing method and device Active CN113577766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110898692.6A CN113577766B (en) 2021-08-05 2021-08-05 Object processing method and device

Publications (2)

Publication Number Publication Date
CN113577766A CN113577766A (en) 2021-11-02
CN113577766B true CN113577766B (en) 2024-04-02





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant