CN112742026B - Game control method, game control device, storage medium and electronic equipment - Google Patents

Game control method, game control device, storage medium and electronic equipment

Info

Publication number
CN112742026B
CN112742026B (application CN202010188645.8A / CN202010188645A; other version CN112742026A)
Authority
CN
China
Prior art keywords
image
key
game interface
game
bullet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010188645.8A
Other languages
Chinese (zh)
Other versions
CN112742026A (en)
Inventor
王洁梅
张力柯
李旭冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010188645.8A priority Critical patent/CN112742026B/en
Publication of CN112742026A publication Critical patent/CN112742026A/en
Application granted granted Critical
Publication of CN112742026B publication Critical patent/CN112742026B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 - Features of games using an electronically generated display having two or more dimensions characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 - Details of the user interface
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a game control method, a game control device, a storage medium, and an electronic device; it belongs to the field of computer technology and relates to artificial intelligence and computer vision. When it is determined that the game interface is blocked by a pop-up frame, a game interface image is acquired, an operation area corresponding to an operation key of the pop-up frame is determined in the game interface image, and a corresponding operation is performed on the pop-up frame in the game interface according to the determined operation area, so that the game can continue to run automatically.

Description

Game control method, game control device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technology, and more particularly, to a game control method, apparatus, storage medium, and electronic device.
Background
An electronic game (hereinafter simply referred to as a game) is a game that runs on an electronic device platform, and can be classified into network games and stand-alone games. During the running of a game, pop-up frames often appear at random in the game interface, such as function-introduction pop-up frames, pop-up frames asking the user to choose between different operations, and advertisement pop-up frames.
In some scenarios, the game needs to run automatically. For example, a game needs to be tested during development and before release to verify that it functions properly, and such testing requires the game to run automatically.
During automatic running, if a pop-up frame appears in the game interface, the game may be blocked by the pop-up frame and stop running, so the purpose of the test cannot be achieved.
Disclosure of Invention
To solve the above technical problem, embodiments of the present application provide a game control method, a game control device, a storage medium, and an electronic device, which alleviate the problem of a game being blocked by a pop-up frame and stopping its automatic operation.
In order to achieve the above purpose, the technical solution of the embodiments of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a game control method, including:
when it is determined that the game interface is blocked by a pop-up frame, acquiring a game interface image;
according to the priority order of the pop-up frame template libraries, preferentially acquiring pop-up frame template images from the template library with the higher priority, and matching the acquired template images with the game interface image;
if a pop-up frame template image acquired from a template library is matched in the game interface image, determining, in the game interface image, an operation area corresponding to an operation key of the pop-up frame according to the target area where the operation key is located in the matched template image;
and performing the corresponding operation on the pop-up frame in the game interface according to the determined operation area.
In a second aspect, embodiments of the present application provide a game control method, including:
when it is determined that the game interface is blocked by a pop-up frame, acquiring a game interface image;
acquiring a pop-up frame template image from a pop-up frame template library, and matching the acquired template image with the game interface image;
if a pop-up frame template image acquired from the template library is matched in the game interface image, determining, in the game interface image, an operation area corresponding to an operation key of the pop-up frame according to the target area where the operation key is located in the matched template image;
if none of the pop-up frame template images in the template library is matched in the game interface image, determining the operation area corresponding to an operation key of the pop-up frame in the game interface image through a trained key recognition model;
and performing the corresponding operation on the pop-up frame in the game interface according to the determined operation area.
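The second-aspect flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helpers, the 1-D "images", and the return values are all hypothetical stand-ins for real image matching and model inference.

```python
def match_template(image, template):
    """Toy stand-in for image matching: exact sub-sequence search on
    1-D 'images' represented as lists. Returns the match offset or None."""
    n, m = len(image), len(template)
    for i in range(n - m + 1):
        if image[i:i + m] == template:
            return i
    return None

def handle_popup(image, template_libraries, model_fallback):
    """Second-aspect control flow: try every template library in priority
    order; if no template matches anywhere, fall back to the trained
    key recognition model (here an arbitrary callable)."""
    for library in template_libraries:
        for template in library:
            pos = match_template(image, template)
            if pos is not None:
                return ("template", pos)  # operation area from the matched target area
    return ("model", model_fallback(image))
```

The point of the structure is that the model is only consulted after every library has been exhausted, matching the claim's "if none ... is matched" condition.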
In a third aspect, an embodiment of the present application provides a game control device, including:
an image acquisition unit, configured to acquire a game interface image when it is determined that the game interface is blocked by a pop-up frame;
an image matching unit, configured to preferentially acquire pop-up frame template images from the template library with the higher priority according to the priority order of the pop-up frame template libraries, and match the acquired template images with the game interface image;
an operation area determining unit, configured to, if a pop-up frame template image acquired from a template library is matched in the game interface image, determine, in the game interface image, the operation area corresponding to an operation key of the pop-up frame according to the target area where the operation key is located in the matched template image;
and an operation execution unit, configured to perform the corresponding operation on the pop-up frame in the game interface according to the determined operation area.
In an alternative embodiment, there are at least two pop-up frame template libraries, each divided according to pop-up frame type; the template images in each library correspond to the same pop-up frame type, and the pop-up frames of the type corresponding to a higher-priority library have a higher in-game pop-up probability than those of the type corresponding to a lower-priority library.
In an alternative embodiment, the pop-up frame types include at least game pop-up frames and device pop-up frames, the template libraries include at least a game pop-up frame template library and a device pop-up frame template library, and the game pop-up frame template library has a higher priority than the device pop-up frame template library.
In an alternative embodiment, the image matching unit is specifically configured to:
traversing the game interface image with a sliding window at a set step size to obtain a plurality of window areas, the sliding window having the same size as the acquired pop-up frame template image;
determining the maximum similarity value among the similarity values between the acquired template image and the window areas;
and when the maximum similarity value meets a set confidence threshold, determining that the window area corresponding to the maximum similarity value matches the acquired template image.
In an alternative embodiment, the image matching unit is specifically configured to:
determining the similarity value between the acquired pop-up frame template image and each window area;
storing the similarity values between the acquired template image and the window areas as a similarity matrix, arranged according to the position of each window area in the game interface image;
and selecting the maximum similarity value from the similarity matrix.
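A sketch of the sliding-window matching described above, using normalized cross-correlation as the similarity measure. The similarity measure, step size, and confidence value are illustrative choices, not specified by the patent; in practice a library routine such as OpenCV's `matchTemplate` performs this computation.

```python
import numpy as np

def similarity_matrix(image, template, step=1):
    """Slide a window the size of `template` over `image` at the given
    step; store each window's similarity at the window's position,
    yielding the similarity matrix described in the patent."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    rows = (ih - th) // step + 1
    cols = (iw - tw) // step + 1
    sim = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = image[r * step:r * step + th, c * step:c * step + tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            # normalized cross-correlation; 0 where a window is constant
            sim[r, c] = (w * t).sum() / denom if denom > 0 else 0.0
    return sim

def best_match(image, template, confidence=0.9):
    """Return the top-left corner of the best window if its maximum
    similarity meets the set confidence, else None."""
    sim = similarity_matrix(image, template)
    r, c = np.unravel_index(np.argmax(sim), sim.shape)
    return (int(r), int(c)) if sim[r, c] >= confidence else None
```

Selecting the maximum from the matrix (rather than stopping at the first window above the confidence) guarantees the best-aligned window is chosen.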
In an alternative embodiment, the operation region determining unit is further configured to:
if none of the pop-up frame template images in any template library is matched in the game interface image, determining the operation area corresponding to an operation key of the pop-up frame in the game interface image through a trained key recognition model.
In an alternative embodiment, the key recognition model includes a feature extraction network and a regression network; the operation region determination unit is further configured to:
extracting features from the game interface image through the feature extraction network to obtain a feature map of the game interface image;
inputting the obtained feature map into the regression network to obtain the key positions of the operation keys contained in the game interface image, as output by the regression network;
and determining the operation area corresponding to an operation key of the pop-up frame according to the key positions output by the regression network.
In an alternative embodiment, the key recognition model further comprises a classification network; after obtaining the feature map of the game interface image, the operation area determining unit is further configured to:
inputting the obtained feature map into the classification network to obtain the key type of the operation key at each key position, as output by the classification network;
the determining of the operation area corresponding to an operation key of the pop-up frame according to the key positions output by the regression network includes:
if the regression network outputs a plurality of key positions, selecting, according to preset click priorities of key types, the key type with the highest click priority among the key types output by the classification network;
and taking the key position corresponding to the operation key of the key type with the highest click priority as the operation area.
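The click-priority selection above can be sketched as follows. The specific key types and their priority ordering are illustrative assumptions; the patent only specifies that the priorities are preset.

```python
# Hypothetical preset click priorities: lower number = clicked first.
CLICK_PRIORITY = {"confirm": 0, "next": 1, "close": 2, "return": 3}

def pick_operation_area(detections):
    """Given (key_type, box) pairs combining the classification network's
    types with the regression network's positions, return the box of the
    key whose type has the highest click priority; unknown types rank last."""
    best = min(detections,
               key=lambda d: CLICK_PRIORITY.get(d[0], len(CLICK_PRIORITY)))
    return best[1]
```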
In an alternative embodiment, the image acquisition unit is further configured to:
acquiring one frame of the game interface image at intervals of a set duration;
comparing the current frame of the game interface image with the previous frame;
and if the similarity between the current frame and the previous frame reaches a set threshold, determining that the game interface is blocked by a pop-up frame.
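The blocked-interface check above can be sketched as follows. The pixel-equality similarity measure and the 0.95 threshold are illustrative; frames are represented here as flat lists of pixel values for simplicity.

```python
def frame_similarity(frame_a, frame_b):
    """Fraction of equal pixels between two equally sized frames,
    each given as a flat sequence of pixel values."""
    same = sum(1 for a, b in zip(frame_a, frame_b) if a == b)
    return same / len(frame_a)

def interface_blocked(prev_frame, cur_frame, threshold=0.95):
    """If two consecutively sampled frames are nearly identical, the
    interface is assumed to be stuck behind a pop-up frame."""
    return frame_similarity(prev_frame, cur_frame) >= threshold
```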
In an alternative embodiment, the apparatus further comprises a model training unit for:
extracting a training image containing keys from a training sample set, the training image being annotated with key labels;
inputting the extracted training image into the key recognition model to obtain a key recognition result for the training image;
determining a loss value according to the key recognition result of the training image and its key labels;
and adjusting the parameters of the key recognition model according to the loss value until the loss value converges to a preset expected value, thereby obtaining the trained key recognition model.
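The training loop above can be sketched with a toy model. The patent's model is a neural network; here a one-parameter linear map predicting a key coordinate stands in for it, so the forward pass, loss, parameter update, and convergence check can be shown self-contained.

```python
def train_key_model(samples, lr=0.1, expected_loss=1e-4, max_steps=10_000):
    """Toy training loop: feed labeled samples through the 'model',
    compute a mean-squared loss against the labels, and adjust the
    parameter by gradient descent until the loss converges to the
    preset expected value."""
    w = 0.0  # single model parameter
    loss = float("inf")
    for _ in range(max_steps):
        loss = grad = 0.0
        for x, y in samples:              # forward pass + loss
            err = w * x - y
            loss += err * err / len(samples)
            grad += 2 * err * x / len(samples)
        if loss <= expected_loss:         # converged to the expected value
            break
        w -= lr * grad                    # adjust parameters from the loss
    return w, loss
```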
In a fourth aspect, embodiments of the present application provide a game control device, including:
an image acquisition unit, configured to acquire a game interface image when it is determined that the game interface is blocked by a pop-up frame;
an image matching unit, configured to acquire a pop-up frame template image from the pop-up frame template library and match the acquired template image with the game interface image;
an operation area determining unit, configured to, if a pop-up frame template image acquired from the template library is matched in the game interface image, determine, in the game interface image, the operation area corresponding to an operation key of the pop-up frame according to the target area where the operation key is located in the matched template image; and, if none of the pop-up frame template images in the template library is matched in the game interface image, determine the operation area corresponding to an operation key of the pop-up frame in the game interface image through a trained key recognition model;
and an operation execution unit, configured to perform the corresponding operation on the pop-up frame in the game interface according to the determined operation area.
In a fifth aspect, embodiments of the present application further provide a computer-readable storage medium having a computer program stored therein, which when executed by a processor, implements the game control method of the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application further provide an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the game control method of the first aspect or the second aspect is implemented.
According to the game control method, device, storage medium, and electronic device provided herein, when it is determined that the game interface is blocked by a pop-up frame, a game interface image is acquired, the operation area corresponding to an operation key of the pop-up frame is determined in the game interface image, and the corresponding operation is performed on the pop-up frame in the game interface according to the determined operation area, so that the game can continue to run automatically.
In one embodiment, the game interface image may be matched against the pop-up frame template images in the pop-up frame template libraries. The libraries can be assigned priorities such that a library whose template images correspond to pop-up frames with a higher in-game pop-up probability has a higher priority. During matching, the game interface image is matched against template images acquired, in priority order, from the higher-priority libraries first, so a matching template is found sooner; this improves matching efficiency and speeds up pop-up frame handling.
In another embodiment, the game interface image may be matched against the template images in the pop-up frame template library, and when a pop-up frame that is not stored in the library appears in the game interface, the operation area corresponding to an operation key of that pop-up frame can be determined in the game interface image through the trained key recognition model. The trained model broadens recognition coverage, increases the probability of recognizing pop-up frames in the game interface, and further reduces occurrences of the game being blocked by a pop-up frame.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is an application scenario schematic diagram of a game control method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a game control method according to an embodiment of the present application;
FIG. 3 is a schematic view of a pop-up frame in a game interface according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of determining that a game interface is blocked by a pop-up frame according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating the execution process when a game interface is blocked by a pop-up frame according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of matching a pop-up frame template image with a game interface image according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface for matching a pop-up frame template image with a game interface image according to an embodiment of the present application;
FIG. 8 is a schematic view of another interface for matching a pop-up frame template image with a game interface image according to an embodiment of the present application;
FIG. 9 is a schematic view of a further interface for matching a pop-up frame template image with a game interface image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an interface showing a game pop-up frame and a device pop-up frame according to an embodiment of the present application;
FIG. 11 is a flowchart of another game control method according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a key identification model according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of another key identification model according to an embodiment of the present application;
fig. 14 is a flowchart of a training process of a key recognition model according to an embodiment of the present application;
FIG. 15 is a schematic illustration of a labeling interface of a training image according to an embodiment of the present disclosure;
FIG. 16 is a flow chart of another game control method according to an embodiment of the present disclosure;
FIG. 17 is a block diagram of a game control device according to an embodiment of the present application;
FIG. 18 is a block diagram of another game control device according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
It should be noted that the terms "comprises" and "comprising," along with their variants, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Some of the terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) UI (User Interface): the game interface displayed to the user while the game is running, also called a GUI (Graphical User Interface); the game interacts with the user through the UI. In the embodiments of the present application, the game interface is still displayed during running even though the game runs automatically.
(2) Pop-up frame: a message prompt box that pops up over the game interface while the game is running. A pop-up frame generally includes operable keys, such as a close key, a return key, or a next-step key; the embodiments of the present application enable the game to continue by operating these keys.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration." Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms "first," "second," and the like herein are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features; in the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
An application scenario of the game control method provided in the embodiment of the present application may be shown in fig. 1, where the application scenario includes a terminal device 11 and a data processing server 12. The terminal device 11 and the data processing server 12 may be connected by a wired connection or a wireless connection, and transmit data. For example, the terminal device 11 and the data processing server 12 may be connected by a data line or by a wired network; the terminal device 11 and the data processing server 12 may also be connected through a radio frequency module, a bluetooth module, or a wireless network.
The terminal device 11 may be a device for developing a game, for example, a computer, a notebook, a tablet computer, or other terminal devices capable of performing game development, etc. The terminal device 11 holds an installation package of the game thereon, and can transmit the installation package of the game to the data processing server 12. The data processing server 12 is used for testing the game according to the installation package of the game, and the data processing server 12 can be a server or a server cluster or cloud computing center formed by a plurality of servers, or a virtualization platform.
The above scenario is only one application scenario in the embodiments of the present application, and in other application scenarios, the terminal device may also perform an automatic test on the game.
During automatic game testing or other automatic running, if a pop-up frame appears in the game interface, the game is blocked by the pop-up frame and stops, and the automatic run cannot be completed. To solve this problem, the embodiments of the present application provide a game control method, a game control device, a storage medium, and an electronic device: when it is determined that the game interface is blocked by a pop-up frame, a game interface image is acquired, the operation area corresponding to an operation key of the pop-up frame is determined in the game interface image, and the corresponding operation is performed on the pop-up frame according to the determined operation area, so that the game can continue to run automatically.
The embodiments of the present application relate to artificial intelligence (AI) and are designed based on the computer vision (CV) and machine learning (ML) techniques within artificial intelligence.
Artificial intelligence is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision. Artificial intelligence techniques mainly include computer vision techniques, speech processing techniques, machine learning/deep learning, and other directions.
With research and progress in artificial intelligence technology, AI is being applied in many fields, such as smart homes, image retrieval, video monitoring, smart speakers, smart marketing, unmanned and autonomous driving, drones, robots, smart healthcare, and more; as the technology develops, artificial intelligence will be applied in still more fields and with increasing value.
Computer vision technology is an important application of artificial intelligence, which studies related theories and techniques in an attempt to build artificial intelligence systems capable of acquiring information from images, video or multidimensional data, in place of human visual interpretation. Typical computer vision techniques generally include image processing and video analysis. The embodiment of the application relates to identification of operation keys in a game interface, and belongs to a method for image processing.
Machine learning is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and more. It studies how a computer simulates or implements human learning behavior to acquire new knowledge or skills and reorganizes existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of AI. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning. In the operation key identification process, a key recognition model based on machine learning or deep learning is adopted to identify, from the image features of the UI, the key types and key positions of the operation keys contained in the input game interface image.
Fig. 2 is a flow chart of a game control method according to an embodiment of the present application, where the method may be performed by the data processing server 12 in fig. 1, or may be performed by a terminal device or other electronic devices. As shown in fig. 2, the game control method includes the steps of:
step S201, when the game interface is determined to be blocked by the bullet frame, a game interface image is acquired.
Whether the game interface is blocked by a pop-up frame can be determined by comparing whether two consecutive frames of the game interface are similar, or by determining whether the game progress data has gone without update for more than a set duration. When it is determined that the game interface is blocked by a pop-up frame, the currently displayed game interface image is acquired.
The acquired game interface image may be an image of the entire currently displayed UI, or an image of a designated portion cropped from it. Considering that pop-up frames in a game generally appear in the center, in some embodiments the designated portion may be the middle of the entire UI. For example, a game interface image of a specified length and width may be cropped from the entire UI such that the center point of the resulting image coincides with the center point of the entire UI image.
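The center-crop geometry described above reduces to simple arithmetic; a minimal sketch (coordinates assume an origin at the top-left, as is usual for screen images):

```python
def center_crop_box(ui_w, ui_h, crop_w, crop_h):
    """Compute the crop rectangle (left, top, right, bottom) of the
    specified width/height whose center coincides with the UI center."""
    left = (ui_w - crop_w) // 2
    top = (ui_h - crop_h) // 2
    return left, top, left + crop_w, top + crop_h
```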
Step S202, according to the priority order of the bullet frame template library, preferentially acquiring bullet frame template images from the bullet frame template library with high priority, and matching the acquired bullet frame template images with game interface images.
The priority of the bullet frame template libraries is positively correlated with the pop-up probability of the bullet frames corresponding to the bullet frame template images in the bullet frame template libraries in the game.
Specifically, images of the bullet frames that may be ejected during the game play may be collected in advance, and the collected images are saved as bullet frame template images in a bullet frame template library. Multiple bullet frame template libraries can be set, different priorities are set for each bullet frame template library, and the higher the pop-up probability of a bullet frame corresponding to a bullet frame template image in a game, the higher the priority of the bullet frame template library.
When matching the game interface image with bullet frame template images, template images are acquired from the libraries according to the priority order of the bullet frame template libraries, so that the game interface image is matched first against template images from the high-priority library.
For example, consider that in a game, the pop-up probability is highest for game bullet frames used to show information relevant to the course of the game, while device bullet frames showing device-related information may also pop up. During the game running process, a bullet frame may pop up asking the user whether to share their location, a message box may pop up prompting the user about the resources they currently own, or a message box may pop up prompting the user about the usage or acquisition path of a certain game resource; all of these are bullet frames related to game information and are called game bullet frames. Also during the game running process, if the device detects a hardware change, for example a data line plugged into the USB interface, a bullet frame may pop up asking whether charging or data transmission is needed; or when the device detects a data line plugged into the earphone interface, a bullet frame for selecting microphone or earphone may pop up; or when the device detects that the battery level is too low, a bullet frame for entering power saving mode may pop up. These are all bullet frames related to the device and are called device bullet frames.
Because game design often prompts the user about the game through bullet frames, the pop-up probability of game bullet frames is usually high; for example, a game bullet frame may pop up every few minutes or every ten-odd minutes during the game running process. The probability of the device hardware changing is relatively low; in particular, during the game running process, a user will not frequently plug and unplug data lines, so device bullet frames are rare, and one may not pop up even over tens of minutes or hours. Therefore, during the game running process the pop-up probability of device bullet frames is much lower than that of game bullet frames.
Thus, in one embodiment, as shown in fig. 10, the types of the bullet frames may include game bullet frames and device bullet frames, the bullet frame template library may include a game bullet frame template library and a device bullet frame template library, the game bullet frame template library includes template images of respective game bullet frames, the device bullet frame template library includes template images of respective device bullet frames, and the game bullet frame template library has a higher priority than the device bullet frame template library.
Step S203, if the game interface image is matched with the bullet frame template image obtained from the bullet frame template library, determining an operation area corresponding to the operation key of the bullet frame in the game interface image according to the target area where the operation key in the matched bullet frame template image is located.
For example, as shown in fig. 3, in the frame template image matched with the game interface image, the "know" operation key 31 exists in the frame template image, and according to the target area of the "know" operation key 31 in the frame template image and the position of the frame template image in the game interface image, the corresponding operation area of the "know" operation key 31 in the game interface image can be determined.
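The coordinate mapping described above amounts to adding the matched template's top-left offset in the game interface image to the key's bounding box inside the template. A minimal sketch (function and argument names are illustrative, not from the patent):

```python
def map_key_to_interface(template_top_left, key_box_in_template):
    """Translate a key's bounding box inside the matched bullet frame template
    into game-interface coordinates by adding the template's top-left offset."""
    tx, ty = template_top_left            # where the template matched in the interface image
    kx, ky, kw, kh = key_box_in_template  # key's target area inside the template
    return (tx + kx, ty + ky, kw, kh)
```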
Step S204, corresponding operation is carried out on the bullet frames in the game interface according to the determined operation areas.
The determined operation area may be clicked to close the bullet frame in the game interface, so that the game continues to run automatically; or, if the determined operation area corresponds to a minimize key, clicking the operation area minimizes the bullet frame to the background, so that the game continues to run in the foreground.
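The patent does not fix a particular click mechanism. On an Android test device, one common option is issuing an `input tap` at the center of the operation area via adb; the sketch below only builds the command string (the adb workflow is an assumption, not part of the patent):

```python
def tap_command(operation_area):
    """Build an `adb shell input tap` command aimed at the center of the
    operation area (x, y, w, h), in screen pixels."""
    x, y, w, h = operation_area
    cx, cy = x + w // 2, y + h // 2
    return f"adb shell input tap {cx} {cy}"
```

The resulting string could then be executed with `subprocess.run(cmd.split())` on a host connected to the device.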
According to the game control method, when the game interface is determined to be blocked by the bullet frame, the game interface image is obtained, the operation area corresponding to the operation key of the bullet frame is determined in the game interface image by matching the game interface image with the bullet frame template image in the bullet frame template library, and corresponding operation is performed on the bullet frame in the game interface according to the determined operation area, so that the game can continue to run automatically.
The priority can be set according to the pop-up probability, in the game, of the bullet frames corresponding to the template images in each library: the higher the pop-up probability, the higher the priority of the bullet frame template library. Compared with storing all bullet frame template images in a single library, each bullet frame template library in this scheme holds relatively few template images; during matching, the game interface image is matched first against template images acquired from the high-priority library, according to the priority order of the libraries, which improves matching efficiency and thus the processing speed for bullet frames.
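The priority-ordered search can be sketched as follows; the library layout and the `match_fn` callback are illustrative assumptions, not an interface defined by the patent:

```python
def match_by_priority(interface_image, libraries, match_fn):
    """libraries: list of (priority, templates) pairs; larger priority = searched first.
    match_fn(image, template) returns a match location or None.
    Returns the first (template, location) found, trying high-priority libraries first."""
    for _, templates in sorted(libraries, key=lambda lib: -lib[0]):
        for template in templates:
            location = match_fn(interface_image, template)
            if location is not None:
                return template, location
    return None  # no bullet frame template matched
```

Because game bullet frames pop up far more often than device bullet frames, placing the game library at higher priority means most searches terminate early.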
In one embodiment, whether the game interface is blocked by a bullet frame may be determined by the method shown in fig. 4, including the following steps:
step S2011, a frame of the game interface image is acquired at every set time interval.
During the automatic running of the game, a frame of the game interface image is acquired at every set time interval; for example, the set time interval may be 2 seconds, 10 seconds or 20 seconds. The acquired game interface image may be an image of the entire UI interface, or an image of a designated portion cut out from the image of the entire UI interface as shown in fig. 5, where the designated portion may be the middle portion of the UI interface.
Step S2012, the current frame game interface image is compared with the previous frame game interface image.
For example, a method of directly comparing pixel points at the same position may be adopted to calculate a difference value of pixel values of pixel points at the same position in the current frame game interface image and the previous frame game interface image, and if the difference value is within the set range, the two pixel points at the same position may be considered to be the same pixel point.
Step S2013, judging whether the similarity between the current frame of game interface image and the previous frame of game interface image reaches a set threshold value; if yes, go to step S2014; if not, the process returns to step S2011.
In one embodiment, if the number of identical pixels in two frames of game interface images reaches a set number, the similarity between the current frame of game interface image and the previous frame of game interface image may be considered to reach a set threshold.
In another embodiment, the similarity between the current frame of game interface image and the previous frame of game interface image can be determined according to the proportion of the same pixel points in all the pixel points, and the obtained similarity is compared with a set threshold value. Illustratively, the set threshold may be 80% or 90%. Or if the proportion of the same pixel points in all the pixel points reaches a set proportion, the similarity between the current frame game interface image and the previous frame game interface image can be considered to reach a set threshold, wherein the set proportion can be 80% or 90%.
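Combining the pixel-difference test with the same-pixel ratio gives a simple stuck-frame check; a sketch under assumed values (the tolerance and threshold are illustrative, not fixed by the patent):

```python
import numpy as np

def frames_similar(prev_frame: np.ndarray, curr_frame: np.ndarray,
                   pixel_tol: int = 10, ratio_threshold: float = 0.9) -> bool:
    """Pixels whose value difference is within pixel_tol count as 'the same';
    the frames are considered similar (interface likely blocked by a bullet
    frame) when the same-pixel ratio reaches the threshold."""
    diff = np.abs(prev_frame.astype(np.int32) - curr_frame.astype(np.int32))
    same_ratio = float(np.mean(diff <= pixel_tol))
    return same_ratio >= ratio_threshold
```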
Step S2014, determining that the game interface is blocked by the bullet frame.
If the similarity between the current frame and the previous frame of the game interface image reaches the set threshold, the two successively acquired frames are similar, and the game interface can be determined to be in a state of being blocked by a bullet frame. For example, if the similarity of the two frames shown in fig. 5 is calculated to be 96%, the two frames are almost identical, and it can be determined that the game interface is blocked by a bullet frame.
If the game interface is in a state of being blocked by a bullet frame, the next step is to find the operation area in the game interface that can be clicked to close the bullet frame. In one embodiment, bullet frames that may pop up in the game may be collected in advance to establish a bullet frame sample library; a template matching method is then used to find the operation area corresponding to the operation key of the bullet frame in the game interface, and the found operation area is clicked. Feature point matching or other matching algorithms may also be used to find the operation area corresponding to the operation key of the bullet frame in the game interface. Specifically, a plurality of bullet frame template libraries may be established and divided according to the types of the bullet frames, so that the bullet frames corresponding to the template images in each library are of the same type, and the bullet frame type of a high-priority library has a higher pop-up probability in the game than the bullet frame type of a low-priority library. When matching is performed, bullet frame template images are acquired preferentially from the high-priority library according to the priority order of the libraries, and the acquired template images are matched with the game interface image.
The specific implementation of the template matching method is described in detail below. The idea of template matching is to find the part of the game interface image that best matches the bullet frame template image. In one embodiment, the acquired bullet frame template image may be matched with the game interface image according to the method shown in fig. 6, including the following steps:
step S2021, traversing the game interface image by adopting the sliding window according to the set step length to obtain a plurality of window areas.
For example, as shown in fig. 7 or 8, the game interface image may be traversed by a sliding window starting from the top left corner of the game interface image, moving from left to right and from top to bottom with a step size of 1, resulting in a plurality of window regions. The size of the sliding window is the same as that of the acquired bullet frame template image. As shown in fig. 7, the acquired bullet frame template image is template a, and a sliding window of the same size as template a is used to cut window regions out of the game interface image. As shown in fig. 8, the acquired bullet frame template image is template b, and a sliding window of the same size as template b is used to cut window regions out of the game interface image.
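The traversal order described above can be sketched as a small generator (names are illustrative):

```python
def sliding_windows(image_h, image_w, tmpl_h, tmpl_w, step=1):
    """Yield the top-left (x, y) of every window, scanning left-to-right
    then top-to-bottom, with window size equal to the template size."""
    for y in range(0, image_h - tmpl_h + 1, step):
        for x in range(0, image_w - tmpl_w + 1, step):
            yield (x, y)
```

With step 1 this produces (H − h + 1) × (W − w + 1) window regions for an H×W image and h×w template.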
Step S2022, determining the maximum similarity value among the similarity values between the acquired bullet frame template image and the plurality of window regions.
The bullet frame template image is compared with each window region in turn, and the similarity value between the bullet frame template image and each window region is determined. The larger the similarity value, the more similar the window region is to the bullet frame template image; the window region corresponding to the maximum similarity value is the region that best matches the bullet frame template image and can be regarded as the best match.
In one embodiment, the bullet frame template image may be compared with each window region in turn to determine the similarity value between the bullet frame template image and each window region. The similarity values are stored as a similarity matrix according to the position of each window region in the game interface image, and the maximum similarity value is selected from the similarity matrix.
In another embodiment, the bullet frame template image may be compared with each window region in turn, and the similarity value between the bullet frame template image and each window region determined. The similarity value of the first compared window region is stored as the maximum similarity value; then, as the bullet frame template image is compared with the window regions one by one, if a newly obtained similarity value is larger than the stored maximum, the stored maximum is updated to the new value, until the similarity computation between the bullet frame template image and all window regions is completed.
For any window area in the game interface image, the similarity value between the bullet frame template image and the window area can be determined by adopting any one of the following matching methods.
First, the squared-difference matching algorithm (CV_TM_SQDIFF): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T(x′,y′) − I(x+x′,y+y′))²
where T(x′,y′) represents the pixel value of each pixel point in the bullet frame template image, and I(x+x′,y+y′) represents the pixel value of each pixel point in the window region. The best match for the squared-difference matching algorithm is 0; the larger the matching degree value, the worse the match between the bullet frame template image and the window region. Therefore, the difference between 1 and the obtained matching degree value R(x,y) can be used as the similarity value between the bullet frame template image and the window region.
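The squared-difference matching degree for a single window can be sketched directly from the formula above (NumPy-based; names are illustrative):

```python
import numpy as np

def sqdiff(template: np.ndarray, window: np.ndarray) -> float:
    """Squared-difference matching degree R(x, y): 0 means a perfect match,
    larger values mean a worse match."""
    t = template.astype(np.float64)
    i = window.astype(np.float64)
    return float(np.sum((t - i) ** 2))
```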
Second, the normalized squared-difference matching algorithm (CV_TM_SQDIFF_NORMED): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T(x′,y′) − I(x+x′,y+y′))² / √( Σ_{x′,y′} T(x′,y′)² · Σ_{x′,y′} I(x+x′,y+y′)² )
The best match for the normalized squared-difference matching algorithm is 0; the larger the matching degree value, the worse the match between the bullet frame template image and the window region. Therefore, the difference between 1 and the obtained matching degree value R(x,y) can be used as the similarity value between the bullet frame template image and the window region.
Third, the first correlation matching algorithm (CV_TM_CCORR): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T(x′,y′) · I(x+x′,y+y′))
The larger the matching degree value obtained by the first correlation matching algorithm, the higher the matching degree between the bullet frame template image and the window region, with 0 representing the worst match. Therefore, the obtained matching degree value R(x,y) can be used directly as the similarity value between the bullet frame template image and the window region.
Fourth, the first normalized correlation matching algorithm (CV_TM_CCORR_NORMED): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T(x′,y′) · I(x+x′,y+y′)) / √( Σ_{x′,y′} T(x′,y′)² · Σ_{x′,y′} I(x+x′,y+y′)² )
The larger the matching degree value obtained by the first normalized correlation matching algorithm, the higher the matching degree between the bullet frame template image and the window region, with 0 representing the worst match. Therefore, the obtained matching degree value R(x,y) can be used directly as the similarity value between the bullet frame template image and the window region.
Fifth, the second correlation matching algorithm (the correlation coefficient method, CV_TM_CCOEFF in OpenCV): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T′(x′,y′) · I′(x+x′,y+y′))
where T′(x′,y′) = T(x′,y′) − 1/(w·h) · Σ_{x″,y″} T(x″,y″);
I′(x+x′,y+y′) = I(x+x′,y+y′) − 1/(w·h) · Σ_{x″,y″} I(x+x″,y+y″)
Here w and h are the width and height of the template image, so the subtracted terms are mean pixel values; T(x″,y″) represents the pixel value of each pixel point in the bullet frame template image, and I(x+x″,y+y″) represents the pixel value of each pixel point in the window region. The larger the matching degree value obtained by the second correlation matching algorithm, the higher the matching degree between the bullet frame template image and the window region: a value of 1 indicates a perfect match, −1 indicates a very poor match, and 0 indicates no correlation. Therefore, the obtained matching degree value R(x,y) can be used directly as the similarity value between the bullet frame template image and the window region, without computing a difference from 1 or other inverse-correlation operations, which saves computation.
Sixth, the second normalized correlation matching algorithm (the normalized correlation coefficient method, CV_TM_CCOEFF_NORMED in OpenCV): the matching degree value R(x,y) between the bullet frame template image and the window region is calculated by the following formula.
R(x,y) = Σ_{x′,y′} (T′(x′,y′) · I′(x+x′,y+y′)) / √( Σ_{x′,y′} T′(x′,y′)² · Σ_{x′,y′} I′(x+x′,y+y′)² )
where T′(x′,y′) = T(x′,y′) − 1/(w·h) · Σ_{x″,y″} T(x″,y″);
I′(x+x′,y+y′) = I(x+x′,y+y′) − 1/(w·h) · Σ_{x″,y″} I(x+x″,y+y″).
The larger the matching degree value obtained by the second normalized correlation matching algorithm, the higher the matching degree between the bullet frame template image and the window region: a value of 1 indicates a perfect match, −1 indicates a very poor match, and 0 indicates no correlation. Therefore, the obtained matching degree value R(x,y) can be used directly as the similarity value between the bullet frame template image and the window region.
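The sixth (normalized, zero-mean) matching method above can be sketched for a single window as follows (NumPy-based; names are illustrative):

```python
import numpy as np

def ccoeff_normed(template: np.ndarray, window: np.ndarray) -> float:
    """Zero-mean normalized correlation: 1 = perfect match, -1 = inverse
    match, 0 = no correlation."""
    t = template.astype(np.float64)
    i = window.astype(np.float64)
    t -= t.mean()          # subtract 1/(w*h) * sum of T, i.e. the template mean
    i -= i.mean()          # subtract 1/(w*h) * sum of I, i.e. the window mean
    denom = np.sqrt(np.sum(t ** 2) * np.sum(i ** 2))
    if denom == 0:
        return 0.0         # a flat template or window carries no signal
    return float(np.sum(t * i) / denom)
```

OpenCV computes the same quantity over all windows at once via `cv2.matchTemplate(src, tmpl, cv2.TM_CCOEFF_NORMED)`.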
In code, template matching may be performed by calling the interface function cv2.matchTemplate(src, tmpl, method) provided by OpenCV, where method selects the matching method and may be any of the matching methods described above.
Step S2023, judging whether the maximum similarity value meets the set confidence level; if so, step S2024 is performed; if not, step S2025 is performed.
Illustratively, the set confidence may be 0.85 or 0.95. If the obtained maximum similarity value is greater than or equal to the set confidence, the window region corresponding to the maximum similarity value can be determined to match the bullet frame template image. If the obtained maximum similarity value is smaller than the set confidence, the window region corresponding to the maximum similarity value does not match the bullet frame template image.
Step S2024 determines that the window area corresponding to the maximum similarity value matches the frame template image.
Step S2025 determines that the bullet frame template image is not matched in the game interface image.
For example, the maximum similarity value of the template a shown in fig. 7 and the window area in the game interface image is 0.68, it may be determined that the template a is not matched in the game interface image, the maximum similarity value of the template b shown in fig. 8 and the window area in the game interface image is 0.98, and it may be determined that the window area corresponding to the maximum similarity value of 0.98 is matched with the template b.
In one embodiment, the template image in the bullet frame template library may be a full bullet frame template image, such as template a in fig. 7 or template b in fig. 8. If the template image in the bullet frame template library is a bullet frame template image, the position of the operation key in the window region matched with the template image can be determined according to the target area of the operation key in the template image, and the operation area of the operation key in the game interface image can then be determined from the position of that window region in the game interface image and the position of the operation key within it, as shown in fig. 8.
If the bullet frame template image includes a plurality of operation keys, as does template a in fig. 7, one operation key may be designated in advance as the preferentially selected operation key.
In another embodiment, the template image in the bullet frame template library may be a key template image, such as template a in fig. 9; a key template image is an image of a key region cut out from a bullet frame template image. If the template image in the bullet frame template library is a key template image and a key template image acquired from the library is matched in the game interface image, the center position of the matched key template image can be taken as the target area where the operation key is located, and the operation area corresponding to the operation key of the bullet frame can be determined in the game interface image according to that target area.
The game interface image may contain a plurality of target areas matching different key template images in the bullet frame template library; for example, the game interface image shown in fig. 5 includes an "off" key and an "on" key, and if templates of both keys are in the library, two different key template images will be matched in the game interface image. In that case, according to the preset click priority of the key template images, the center position of the window region matching the key template image with the highest click priority is taken as the operation area corresponding to the operation key.
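A sketch of this click-priority selection among matched key templates (the priority table and key names are illustrative assumptions):

```python
def pick_operation_area(matches, click_priority):
    """matches: dict mapping key name -> (x, y, w, h) of the window region
    that key's template matched. Returns (key, center) for the matched key
    with the highest click priority (earlier in the list = higher)."""
    for key_name in click_priority:
        if key_name in matches:
            x, y, w, h = matches[key_name]
            return key_name, (x + w // 2, y + h // 2)
    return None
```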
In one embodiment, considering that the bullet frame template images collected in the library may not be comprehensive, some bullet frames may not match any template image in the bullet frame template library. To further improve the recognition rate of the operation keys, if none of the bullet frame template images in any library matches a window region in the game interface image, the operation area corresponding to the operation key of the bullet frame can be determined in the game interface image through a trained key recognition model. The trained key recognition model is obtained by training a base model, taking training images containing keys as input and the key labels annotated in the training images as output.
In an alternative embodiment, the key recognition model may include a feature extraction network and a regression network. Inputting the acquired game interface image into a feature extraction network, carrying out feature extraction on the game interface image through the feature extraction network to obtain a feature image of the game interface image, inputting the obtained feature image into a regression network, and obtaining the key positions of the operation keys contained in the game interface image output by the regression network. And determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network.
In another alternative embodiment, the key recognition model may include a feature extraction network, a classification network, and a regression network. Inputting the acquired game interface image into a feature extraction network, and extracting features of the game interface image through the feature extraction network to obtain a feature map of the game interface image. And respectively inputting the obtained feature map into a classification network and a regression network to obtain key positions of operation keys contained in the game interface image output by the regression network and key types corresponding to the operation keys at each key position output by the classification network. And determining an operation area corresponding to the operation key of the popup frame according to the key position output by the regression network and the key type corresponding to the key position. Specifically, if the regression network outputs a plurality of key positions, according to the click priority of the preset key types, selecting the key type with the highest click priority from the key types output by the classification network, and taking the key position corresponding to the operation key of the key type with the highest click priority as the operation area.
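The selection over the two heads' outputs can be sketched as follows; the outputs are assumed to be parallel lists from the regression and classification networks, and the names are illustrative:

```python
def select_from_model_outputs(key_positions, key_types, type_priority):
    """key_positions: list of (x, y, w, h) boxes from the regression network;
    key_types: parallel list of type labels from the classification network.
    Returns the box whose key type has the highest click priority
    (earlier in type_priority = higher)."""
    best = None
    for box, key_type in zip(key_positions, key_types):
        if key_type in type_priority:
            rank = type_priority.index(key_type)
            if best is None or rank < best[0]:
                best = (rank, box)
    return best[1] if best else None
```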
Fig. 11 is a flowchart of a game control method according to another embodiment of the present application, which may be performed by the data processing server 12 in fig. 1, or by a terminal device or other electronic devices. The steps that are the same as those of the above embodiment are not described in detail in this embodiment. As shown in fig. 11, the game control method includes the steps of:
Step S1101, when it is determined that the game interface is blocked by the bullet frame, a game interface image is acquired.
Step S1102, an elastic frame template image is obtained from an elastic frame template library, and the obtained elastic frame template image is matched with a game interface image.
The number of bullet frame template libraries may be one or more. For example, the bullet frame template library may include only game bullet frames, i.e., consist solely of a game bullet frame template library; or it may include two libraries, a game bullet frame template library and a device bullet frame template library. Alternatively, the game bullet frames and the device bullet frames may both be stored in a single bullet frame template library.
Step S1103, judging whether the image of the game interface is matched with the image of the bullet frame template in the bullet frame template library; if yes, go to step S1104; if not, step S1105 is performed.
Step S1104, determining an operation area corresponding to the operation key of the bullet frame in the game interface image according to the target area where the operation key in the matched bullet frame template image is located.
Step S1105, determining an operation area corresponding to the operation key of the bullet frame in the game interface image through the trained key recognition model.
Considering that the bullet frame template images collected in the bullet frame template library may not be comprehensive, a bullet frame popped up in the game may not have been collected into the library. For example, during a Christmas event, the bullet frame for a temporarily pushed Christmas activity, being newly pushed, may not be found in the bullet frame template library; or, during the game running process, a message prompt box of another application program may pop up that is not in the library. For these situations, a deep learning model may be applied to predict the operation keys contained in the game interface and their positions; that is, the operation area corresponding to the operation key of the bullet frame is determined in the game interface image through the trained key recognition model.
In an alternative embodiment, the key recognition model may include a feature extraction network and a regression network. Inputting the acquired game interface image into a feature extraction network, carrying out feature extraction on the game interface image through the feature extraction network to obtain a feature image of the game interface image, inputting the obtained feature image into a regression network, and obtaining the key positions of the operation keys contained in the game interface image output by the regression network. And determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network.
In another alternative embodiment, as shown in FIG. 12, the key recognition model may include a feature extraction network, a classification network, and a regression network. Inputting the acquired game interface image into a feature extraction network, and extracting features of the game interface image through the feature extraction network to obtain a feature map of the game interface image. And respectively inputting the obtained feature map into a classification network and a regression network to obtain key positions of operation keys contained in the game interface image output by the regression network and key types corresponding to the operation keys at each key position output by the classification network. And determining an operation area corresponding to the operation key of the popup frame according to the key position output by the regression network and the key type corresponding to the key position.
Wherein the feature extraction network may comprise a plurality of convolution layers for feature extraction of the image. Illustratively, the feature extraction network may employ a backbone (back bone) network, an input of which is a game interface image, and an output of which is a feature map of the game interface image.
The classification network may include a plurality of convolutional layers, primarily for classifying the operational keys. The feature map of the game interface image is input into the classification network, the classification network can judge whether the input feature map contains operation keys, and the possibility that the input feature map contains the operation keys, namely the possibility that the operation keys appear in the game interface image is output.
The regression network also includes a plurality of convolution layers, which are mainly used for positioning the operation keys, and the target positioning task can be considered as a regression task. The feature map is input into a regression network, and the regression network can determine the position of the operation key in the input feature map, namely the position of the operation key in the game interface image. The regression network may output a rectangular bounding box indicating the location of the operating key.
If the regression network outputs a plurality of key positions, selecting the key type with the highest click priority from the key types output by the classification network according to the click priority of the preset key types, and taking the key position corresponding to the operation key of the key type with the highest click priority as an operation area.
For example, in one embodiment, assume the key recognition model can recognize three keys, "cancel", "return", and "close", where the "cancel" key has the highest click priority and the "return" key the lowest. If the key recognition model recognizes that a game interface image contains both a "cancel" key and a "close" key, the two keys correspond to different key positions, and the key position corresponding to the "cancel" key, which has the higher click priority, is taken as the operation area according to the click priorities of the key types.
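The selection rule above can be sketched as follows; the priority table and function signature are illustrative assumptions matching the example ("cancel" highest, "return" lowest):

```python
# Illustrative click priorities: lower value = higher priority.
CLICK_PRIORITY = {"cancel": 0, "close": 1, "return": 2}

def select_operation_area(detections, priority=CLICK_PRIORITY):
    """detections: list of (key_type, key_position) pairs as produced by the
    classification and regression networks. Returns the key position of the
    detected key whose type has the highest click priority, or None if no
    detected key type appears in the priority table."""
    candidates = [d for d in detections if d[0] in priority]
    if not candidates:
        return None
    _, position = min(candidates, key=lambda d: priority[d[0]])
    return position
```

For the example in the text, a frame containing both "cancel" and "close" resolves to the position of "cancel".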
Illustratively, the key recognition model may employ the network model of the YOLO algorithm, which, as shown in fig. 13, includes a plurality of convolution layers (Convolutional), an average pooling layer (Avgpool), a connection layer (Connected), and a classifier (Softmax). The connection layer outputs the key positions of the operation keys contained in the game interface image, and the classifier outputs the key type corresponding to each key position. The key recognition model may also employ the RefineDet network model.
Step S1106, corresponding operation is executed on the bullet frame in the game interface according to the determined operation area.
In this embodiment, the game interface image may first be matched against the bullet frame template images in the bullet frame template library; when a bullet frame that is not stored in the bullet frame template library appears in the game interface, the operation area corresponding to the operation key of that bullet frame can still be determined in the game interface image through the trained key recognition model. The trained key recognition model thus broadens the range of bullet frames that can be recognized in the game interface, increases the probability of recognizing a bullet frame, and further reduces occurrences of the game being blocked by a bullet frame.
The training process of the key recognition model used in the above embodiment, as shown in fig. 14, includes the following steps:
In step S1401, a training image containing keys is extracted from the training sample set.

The training images are marked with key labels.
Training images are collected to construct a training sample set comprising a plurality of training images; a training image may be an image containing a game bullet frame, an image containing a device bullet frame, or a UI interface image containing another type of bullet frame.
After the training images are collected, they are marked, i.e., key labels are set. In one embodiment, the key label may include the key position of each operation key. In another embodiment, the key label may include the key type and key position of each operation key. For example, for the training image shown in fig. 15, the key types and key positions corresponding to the operation keys "close" and "cancel" may each be marked in the training image. Although the "ok" key also appears in fig. 15, there is no need to exit the game during play, so the "ok" key need not be clicked and therefore need not be labeled. This applies to scenarios like that of fig. 15; it does not exclude other scenarios in which the "ok" key does need to be clicked, in which case the key type and key position of the "ok" key also need to be annotated.
In some embodiments, the collected training images may be annotated with the key types and key positions corresponding to the three keys "cancel", "return", and "close". The trained key recognition model can then recognize whether a game interface image contains any of these three keys and predict the key's position.
Step S1402, the extracted training image is input into the key recognition model to obtain the key recognition result of the training image.
In an alternative embodiment, the key recognition model includes a feature extraction network and a regression network. The training image is input into the feature extraction network, which extracts features from the training image to obtain its feature map; the feature map is then input into the regression network to obtain the key positions of the operation keys contained in the training image output by the regression network. These key positions constitute the key recognition result of the training image.
In another alternative embodiment, the key recognition model includes a feature extraction network, a classification network, and a regression network. The training image is input into the feature extraction network, which extracts features from the training image to obtain its feature map. The feature map is then input into the classification network and the regression network respectively, to obtain the key positions of the operation keys contained in the training image output by the regression network, and the key type corresponding to the operation key at each key position output by the classification network. The key positions and the key types corresponding to the operation keys at those positions constitute the key recognition result of the training image.
Step S1403, determining a loss value according to the key recognition result of the training image and the key label of the training image.
In an alternative embodiment, if the key recognition result of the training image includes a key position of the operation key, the loss value may be determined according to a matching degree of the key position in the key recognition result and the key position in the key label.
In another alternative embodiment, if the key recognition result of the training image includes the key positions of the operation keys and the key type corresponding to the operation key at each key position, a first loss value may be determined according to the matching degree between the key positions in the key recognition result and those in the key label, and a second loss value according to the matching degree between the key types in the key recognition result and those in the key label. The weighted sum of the first loss value and the second loss value is taken as the final loss value.
In the above method, the loss value may be calculated using a preset loss function, such as a cross-entropy loss function, for example the sigmoid cross-entropy loss. In general, the loss value measures how close the actual output is to the desired output: the smaller the loss value, the closer the actual output is to the desired output.
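The weighted combination of the two losses can be sketched as follows. The binary cross-entropy of a sigmoid output corresponds to the sigmoid cross-entropy loss mentioned above, while the weights `w_pos` and `w_cls` are illustrative assumptions:

```python
import math

def sigmoid_cross_entropy(logit, label):
    """Binary cross-entropy of a sigmoid output against a 0/1 label."""
    p = 1.0 / (1.0 + math.exp(-logit))
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def total_loss(position_loss, class_loss, w_pos=1.0, w_cls=1.0):
    """Weighted sum of the first (position) and second (class) loss values."""
    return w_pos * position_loss + w_cls * class_loss
```

With `logit = 0` the sigmoid output is 0.5, giving a cross-entropy of ln 2 regardless of the label — the "maximally uncertain" case.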
Step S1404, judging whether the loss value has converged; if yes, step S1406 is performed; if not, step S1405 is performed.
Whether the loss value has converged to a preset expected value is judged as follows: if the loss value is smaller than or equal to the preset expected value, or if the variation amplitude of the loss values obtained over N consecutive training iterations is smaller than or equal to the preset expected value, the loss value is considered to have converged to the preset expected value, indicating convergence; otherwise, the loss value has not yet converged.
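The convergence rule just described — loss at or below the expected value, or the variation over the last N losses at or below it — can be sketched as (parameter names and defaults are assumptions):

```python
def has_converged(loss_history, expected=1e-3, n=5):
    """True if the latest loss is <= `expected`, or if the variation
    amplitude (max - min) of the last `n` losses is <= `expected`."""
    if not loss_history:
        return False
    if loss_history[-1] <= expected:
        return True
    if len(loss_history) >= n:
        recent = loss_history[-n:]
        if max(recent) - min(recent) <= expected:
            return True
    return False
```

The second condition stops training when the loss has plateaued even if it never drops below the expected value.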
In step S1405, parameters of the key recognition model are adjusted according to the loss value.
If the loss value has not converged, a back-propagation algorithm may be used to adjust the parameters of the key recognition model according to the loss value, after which the process returns to step S1401 to continue extracting training images and training the key recognition model.
Step S1406, taking the current parameter as the parameter of the key recognition model, to obtain the trained key recognition model.
After training is completed, a key recognition model obtained through training can be loaded, and the position of an operation key of an unknown bullet frame in the game interface image is predicted through the key recognition model.
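The loop of steps S1401–S1406 (compute loss, check convergence, adjust parameters, repeat) can be illustrated on a deliberately tiny stand-in model; the single-parameter model `y = w * x` and gradient-descent update below are illustrative assumptions, not the actual key recognition network:

```python
def train_toy(samples, lr=0.1, expected=1e-4, max_steps=1000):
    """Toy one-parameter 'model' y = w * x trained by gradient descent,
    illustrating the adjust-until-converged loop of steps S1401-S1406."""
    w = 0.0
    for _ in range(max_steps):
        loss = 0.0
        grad = 0.0
        for x, y in samples:           # S1401/S1402: run samples through model
            err = w * x - y
            loss += err * err          # S1403: squared-error loss
            grad += 2 * err * x
        if loss <= expected:           # S1404: loss has converged
            break
        w -= lr * grad / len(samples)  # S1405: adjust parameters
    return w                           # S1406: keep current parameters
```

On samples drawn from `y = 2x`, the loop drives `w` toward 2 before the step budget is exhausted.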
For easier understanding, fig. 16 shows a specific implementation manner of the game control method provided in the embodiment of the present application, and as shown in fig. 16, the method includes the following steps:
Step S1601, when it is determined that the game interface is blocked by the bullet frame, a game interface image is acquired.
Step S1602, obtaining game bullet frame template images one by one from the game bullet frame template library, which has the higher priority.
Step S1603, determining whether the acquired game bullet frame template image is matched in the game interface image; if yes, step S1607 is performed; if not, step S1604 is performed.
Step S1604, judging whether there are game bullet frame template images in the game bullet frame template library that have not yet participated in matching; if yes, return to step S1602; if not, step S1605 is performed.
Step S1605, acquiring device bullet frame template images one by one from the device bullet frame template library, which has the lower priority.
Considering that relatively many bullet frames may pop up in a game, placing all bullet frame template images in a single template library and matching them sequentially would be relatively time-consuming. This embodiment therefore classifies bullet frames by type into game bullet frames and device bullet frames. Game bullet frames, which are associated with the game, appear relatively frequently in the game, while device bullet frames, which are associated with the device, appear less frequently. The game bullet frame template library is therefore given a higher priority than the device bullet frame template library: when a bullet frame is encountered in the game, the game bullet frame template images are matched first, since a game bullet frame is the more probable case at that moment. Only when the game bullet frame template library cannot be matched against the bullet frame in the game interface image is the device bullet frame template library searched.
That is, if there is no game bullet frame template image in the game bullet frame template library that has not yet participated in matching — i.e., all game bullet frame template images have been matched against the game interface image and none of them matched — then the game bullet frame template library contains no bullet frame matching the game interface image, and the device bullet frame template library can be searched for the bullet frame in the game interface image.
Step S1606, determining whether the acquired device bullet frame template image is matched in the game interface image; if yes, go to step S1607; if not, step S1608 is performed.
Step S1607, determining an operation area corresponding to the operation key of the bullet frame in the game interface image according to the target area where the operation key of the matched bullet frame template image is located.
If a bullet frame template image from either the game bullet frame template library or the device bullet frame template library matches the bullet frame in the game interface image, the operation area corresponding to the operation key of the bullet frame is determined in the game interface image according to the target area where the operation key in the matched bullet frame template image is located.
Step S1608, judging whether there are device bullet frame template images in the device bullet frame template library that have not yet participated in matching; if yes, return to step S1605; if not, step S1609 is performed.
In step S1609, an operation area corresponding to the operation key of the bullet frame is determined in the game interface image through the trained key recognition model.
If there is no device bullet frame template image in the device bullet frame template library that has not yet participated in matching — i.e., all device bullet frame template images have been matched against the game interface image and none of them matched — the operation area corresponding to the operation key of the bullet frame can be determined in the game interface image through the trained key recognition model.
Step S1610, according to the determined operation area, executing corresponding operation on the bullet frame in the game interface.
Consider a previously unseen bullet frame newly appearing in the game; such a bullet frame may never have occurred before, so the stuck state of the game cannot be resolved by matching against the bullet frame template images in the bullet frame template library. This embodiment therefore uses a deep learning model to predict the key types and key positions of the operation keys contained in the game interface image; if a key of type "return", "close", or "cancel" is detected in the game interface image, the corresponding key position is clicked, resolving the game-stuck problem.
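The prioritized fallback of steps S1601–S1610 can be sketched as follows; the function and parameter names, and the callback signatures `match(template, image)` and `model_predict(image)`, are illustrative assumptions:

```python
def find_operation_area(interface_img, game_lib, device_lib, match, model_predict):
    """Prioritized lookup mirroring steps S1601-S1610: try the game bullet
    frame template library first, then the device library, and fall back to
    the trained key recognition model when no template matches.
    `match(template, image)` returns an operation area or None;
    `model_predict(image)` returns the area predicted by the model."""
    for library in (game_lib, device_lib):     # high priority first
        for template in library:
            area = match(template, interface_img)
            if area is not None:
                return area
    return model_predict(interface_img)        # unknown bullet frame
```

Only when every template in both libraries fails is the model consulted, which matches the text: known frames are resolved cheaply, unknown ones still get handled.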
This embodiment provides a game control method: if the game interface is blocked by a bullet frame, the game interface image is first matched against the game bullet frame template library; if the bullet frame in the current game interface is not within the range of the game bullet frame template library, an image detection method is used to judge whether it is in the device bullet frame template library. If the bullet frame in the current game interface is not found in the device bullet frame template library either, a deep learning network model is adopted to predict the key types and key positions of the operation keys contained in the current game interface, and the position that should be clicked in the current game interface image is found by image recognition. This method can detect the bullet frame in the current game interface relatively effectively and reduce occurrences of the game interface being blocked by a bullet frame. It can be applied to automated game testing, effectively assisting such testing, or to other game-UI-based research, such as game AI.
It should be noted that the foregoing method embodiments are described in a progressive manner, with each embodiment focusing on its differences from the others; for identical and similar parts, the embodiments may be referred to one another.
Corresponding to the embodiments of the game control method described above, the embodiments of the present application also provide a game control device. FIG. 17 is a schematic diagram of a game control device according to an embodiment of the present application; as shown in fig. 17, the game control device includes an image acquisition unit 171, an image matching unit 172, an operation area determination unit 173, and an operation execution unit 174.
In some embodiments, the image acquisition unit 171 is configured to acquire a game interface image when it is determined that the game interface is jammed by the bullet frame;
the image matching unit 172 is configured to preferentially obtain a frame template image from the high-priority frame template library according to the priority order of the frame template library, and match the obtained frame template image with the game interface image; wherein: the priority of the bullet frame template library is positively correlated with the pop-up probability of the bullet frames corresponding to the bullet frame template images in the bullet frame template library in the game;
an operation area determining unit 173, configured to determine, in the game interface image, an operation area corresponding to an operation key of the bullet frame according to a target area where the operation key in the matched bullet frame template image is located if the bullet frame template image obtained from the bullet frame template library is matched in the game interface image;
And the operation execution unit 174 is configured to execute a corresponding operation on the bullet frame in the game interface according to the determined operation area.
In an alternative embodiment, the bullet frame template libraries are divided according to the types of bullet frames, the bullet frame type corresponding to the bullet frame template images included in each bullet frame template library is the same, and bullet frames of the type corresponding to a high-priority bullet frame template library have a higher pop-up probability in the game than bullet frames of the type corresponding to a low-priority bullet frame template library.
In an alternative embodiment, the types of bullet frames at least comprise game bullet frames and device bullet frames, the bullet frame template libraries at least comprise a game bullet frame template library and a device bullet frame template library, and the game bullet frame template library has a higher priority than the device bullet frame template library.
In an alternative embodiment, the image matching unit 172 may specifically be configured to:
traversing the game interface image with a sliding window at a set step size to obtain a plurality of window areas, where the sliding window is the same size as the acquired bullet frame template image;

determining the maximum similarity value among the similarity values between the acquired bullet frame template image and the plurality of window areas;

when the maximum similarity value meets the set confidence, determining that the window area corresponding to the maximum similarity value matches the acquired bullet frame template image.
In an alternative embodiment, the image matching unit 172 may specifically be configured to:
determining the similarity value between the acquired bullet frame template image and each window area;

storing the similarity values between the acquired bullet frame template image and the window areas as a similarity matrix according to the position of each window area in the game interface image;

selecting the maximum similarity value from the similarity matrix.
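The sliding-window matching just described can be sketched as follows. The similarity measure here (1 minus the normalized mean absolute difference) is an assumption; the patent does not fix a particular metric, and in practice a library routine such as OpenCV's template matching would typically be used:

```python
import numpy as np

def match_template(image, template, step=1, confidence=0.9):
    """Slide a window the size of `template` over `image` (2-D grayscale
    arrays), store each window's similarity in a matrix, and return the
    (row, col) of the best match if it clears the confidence threshold,
    otherwise None."""
    ih, iw = image.shape
    th, tw = template.shape
    rows = (ih - th) // step + 1
    cols = (iw - tw) // step + 1
    sim = np.zeros((rows, cols))                 # the similarity matrix
    for r in range(rows):
        for c in range(cols):
            window = image[r*step:r*step+th, c*step:c*step+tw]
            # Similarity in [0, 1]: 1 - normalized mean absolute difference.
            sim[r, c] = 1.0 - np.abs(window - template).mean() / 255.0
    best = np.unravel_index(np.argmax(sim), sim.shape)
    if sim[best] >= confidence:
        return (int(best[0]) * step, int(best[1]) * step)
    return None
```

The matrix layout mirrors the window positions in the game interface image, so the argmax index is directly the matched window's top-left coordinate (scaled by the step size).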
In an alternative embodiment, the operation area determining unit 173 may be further configured to:
if all the bullet frame template images in each bullet frame template library are not matched in the game interface image, determining an operation area corresponding to the operation keys of the bullet frame in the game interface image through the trained key identification model.
In an alternative embodiment, the key recognition model includes a feature extraction network and a regression network; the operation area determining unit 173 may be further configured to:
extracting features of the game interface image through a feature extraction network to obtain a feature map of the game interface image;
inputting the obtained feature map into a regression network to obtain key positions of operation keys contained in a game interface image output by the regression network;
and determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network.
In an alternative embodiment, the key recognition model further comprises a classification network; after obtaining the feature map of the game interface image, the operation area determining unit 173 may be further configured to:
inputting the obtained feature map into a classification network to obtain a key type corresponding to an operation key at each key position output by the classification network;
determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network, including:
if the regression network outputs a plurality of key positions, selecting a key type with the highest click priority from key types output by the classification network according to the click priority of the preset key types;
and taking the key position corresponding to the operation key of the key type with the highest click priority as an operation area.
In an alternative embodiment, the image acquisition unit 171 may also be configured to:
acquiring one frame of the game interface image at every set time interval;

comparing the current frame of the game interface image with the previous frame;

if the similarity between the current frame and the previous frame of the game interface image reaches a set threshold, determining that the game interface is blocked by a bullet frame.
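The frame-comparison check above can be sketched as follows; the similarity measure (fraction of identical pixels) and the threshold value are assumptions, since the text only requires that the similarity between consecutive frames reach a set threshold:

```python
import numpy as np

def is_stuck(prev_frame, cur_frame, threshold=0.99):
    """Two consecutive interface images that are nearly identical suggest
    the interface is no longer changing, i.e. it may be blocked by a
    bullet frame. Similarity here is the fraction of identical pixels."""
    same = float(np.mean(prev_frame == cur_frame))
    return same >= threshold
```

Sampling one frame per interval and calling `is_stuck` on each consecutive pair implements the image acquisition unit's trigger condition.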
In an alternative embodiment, as shown in fig. 18, the game control device may further include a model training unit 181 for:
Extracting training images containing keys from a training sample set; the training image is marked with a key label;
inputting the extracted training image into a key recognition model to obtain a key recognition result of the training image;
determining a loss value according to the key recognition result of the training image and the key label;
and adjusting parameters of the key identification model according to the loss value until the loss value converges to a preset expected value, so as to obtain the trained key identification model.
In another embodiment, the image acquisition unit 171 is configured to acquire a game interface image when it is determined that the game interface is jammed by the bullet frame;
the image matching unit 172 is configured to obtain a frame template image from the frame template library, and match the obtained frame template image with a game interface image;
an operation area determining unit 173, configured to determine, in the game interface image, an operation area corresponding to an operation key of the bullet frame according to a target area where the operation key in the matched bullet frame template image is located if the bullet frame template image obtained from the bullet frame template library is matched in the game interface image; if all the bullet frame template images in the bullet frame template library are not matched in the game interface image, determining an operation area corresponding to the operation keys of the bullet frame in the game interface image through the trained key identification model;
And the operation execution unit 174 is configured to execute a corresponding operation on the bullet frame in the game interface according to the determined operation area.
Corresponding to the method embodiments, the embodiments of the present application also provide an electronic device. The electronic device may be a server, such as the data processing server 12 shown in fig. 1, or a terminal device, such as a mobile terminal or a computer, and comprises at least a memory for storing data and a processor for data processing. The processor for data processing may be implemented by a microprocessor, a CPU, a GPU (Graphics Processing Unit), a DSP, or an FPGA. The memory stores operation instructions, which may be computer-executable code; each step in the flow of the game control method of the embodiments of the present application is implemented through these operation instructions.
Fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application; as shown in fig. 19, the electronic device 190 in the embodiment of the present application includes: a processor 191, a display 192, a memory 193, an input device 196, a bus 195, and a communication module 194; the processor 191, memory 193, input device 196, display 192, and communication module 194 are all connected by a bus 195, the bus 195 being used to transfer data between the processor 191, memory 193, display 192, communication module 194, and input device 196.
The memory 193 may be used to store software programs and modules, such as the program instructions/modules corresponding to the game control method in the embodiments of the present application, and the processor 191 executes the software programs and modules stored in the memory 193 to perform the various functional applications and data processing of the electronic device 190, such as the game control method provided in the embodiments of the present application. The memory 193 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one application, and the like, and the data storage area may store data created through the use of the electronic device 190 (e.g., related data such as game interface images, the bullet frame template library, and the trained key recognition model). In addition, the memory 193 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 191 is a control center of the electronic device 190, utilizes the bus 195 and various interfaces and lines to connect the various parts of the overall electronic device 190, performs various functions of the electronic device 190 and processes data by running or executing software programs and/or modules stored in the memory 193, and invoking data stored in the memory 193. Optionally, the processor 191 may include one or more processing units, such as a CPU, GPU (Graphics Processing Unit ), digital processing unit, or the like.
In the embodiments of the present application, during game testing or automated running, the processor 191 displays the game interface to the user through the display 192, and the user can also intuitively see, through the display 192, the result of processing a bullet frame in the game interface, for example the mouse, controlled by the processor 191, clicking the operation key in the bullet frame.
The processor 191 may also be connected to a network through the communication module 194 to obtain an installation package of the game to be tested, etc.
The input device 196 is mainly used to obtain input operations by a user, and when the electronic devices are different, the input device 196 may also be different. For example, when the electronic device is a computer, the input device 196 may be an input device such as a mouse, keyboard, or the like; when the electronic device is a portable device such as a smart phone or a tablet computer, the input device 196 may be a touch screen.
The embodiments of the present application also provide a computer storage medium having stored therein computer-executable instructions for implementing the game control method described in any of the embodiments of the present application.
In some possible embodiments, various aspects of the game control method provided herein may also be implemented in the form of a program product comprising program code for causing a computer device to perform the steps of the game control method according to the various exemplary embodiments herein described above when the program product is run on the computer device, for example, the computer device may perform the steps S201 to S204 shown in fig. 2 or the flows of the game control method of steps S1101 to S1106 shown in fig. 11.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), optical fiber, portable Compact Disk Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (12)

1. A game control method, comprising:
when the game interface is determined to be blocked by the bullet frame, a game interface image is acquired;
According to the priority order of the bullet frame template library, bullet frame template images are obtained from the bullet frame template library, and the obtained bullet frame template images are matched with the game interface images;
if the game interface image is matched with the bullet frame template image obtained from the bullet frame template library, determining an operation area corresponding to the operation key of the bullet frame in the game interface image according to a target area where the operation key in the matched bullet frame template image is located;
if all the bullet frame template images in each bullet frame template library are not matched in the game interface image, determining an operation area corresponding to the operation key of the bullet frame in the game interface image through a trained key identification model; the key identification model comprises a feature extraction network and a regression network; the trained key recognition model is obtained by training a basic model by taking a training image containing keys as input and taking key labels marked in the training image as output;
and executing corresponding operation on the bullet frame in the game interface according to the determined operation area.
2. The method of claim 1, wherein there are at least two bullet frame template libraries, the bullet frame template libraries are divided according to the types of bullet frames, the bullet frame type corresponding to the bullet frame template images included in each bullet frame template library is the same, and bullet frames of the type corresponding to a high-priority bullet frame template library have a higher pop-up probability in the game than bullet frames of the type corresponding to a low-priority bullet frame template library.
3. The method of claim 2, wherein the types of the frames include at least game frames and device frames, the frame template library includes at least a game frame template library and a device frame template library, and the game frame template library has a higher priority than the device frame template library.
4. The method according to claim 1, wherein the matching the acquired frame template image with the game interface image specifically comprises:
traversing the game interface image by adopting a sliding window according to a set step length to obtain a plurality of window areas, the sliding window being the same size as the acquired bullet frame template image;

determining the maximum similarity value among the similarity values between the acquired bullet frame template image and the plurality of window areas;

and when the maximum similarity value meets the set confidence, determining that the window area corresponding to the maximum similarity value matches the acquired bullet frame template image.
5. The method of claim 4, wherein determining the maximum similarity value among the similarity values between the acquired bullet frame template image and the plurality of window areas comprises:
determining the similarity value between the acquired bullet frame template image and each window area;
storing the similarity values between the acquired bullet frame template image and the window areas as a similarity matrix according to the position of each window area in the game interface image;
and selecting the maximum similarity value from the similarity matrix.
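The sliding-window matching of claims 4-5 can be sketched as follows: slide a template-sized window over the interface image at a fixed step, record one similarity value per window position in a matrix, take the maximum, and accept it only if it clears a confidence threshold. This plain-NumPy sketch uses normalized cross-correlation as the similarity measure, which is an assumption; the patent does not fix a specific metric, and a production system would more likely call something like `cv2.matchTemplate`.

```python
# Illustrative sketch (not the patented implementation) of claims 4-5.
import numpy as np

def best_match(image, template, step=1, confidence=0.9):
    th, tw = template.shape
    ih, iw = image.shape
    rows = (ih - th) // step + 1
    cols = (iw - tw) // step + 1
    sim = np.zeros((rows, cols))          # similarity matrix (claim 5)
    t = template - template.mean()
    for r in range(rows):                 # traverse with the sliding window
        for c in range(cols):
            win = image[r*step:r*step+th, c*step:c*step+tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum()) or 1.0
            sim[r, c] = (w * t).sum() / denom  # normalized cross-correlation
    r, c = np.unravel_index(sim.argmax(), sim.shape)
    if sim[r, c] >= confidence:           # confidence check (claim 4)
        return (r * step, c * step)       # top-left corner of matched window
    return None
```

Storing the whole similarity matrix, rather than only a running maximum, also preserves the position information needed to map the best window back onto the game interface image.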
6. The method of claim 1, wherein determining an operation region corresponding to an operation key of a bullet frame in the game interface image through a trained key recognition model comprises:
extracting features of the game interface image through the feature extraction network to obtain a feature map of the game interface image;
inputting the obtained feature map into the regression network to obtain key positions of operation keys contained in the game interface image output by the regression network;
and determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network.
7. The method of claim 6, wherein the key recognition model further comprises a classification network; after obtaining the feature map of the game interface image, determining an operation area corresponding to the operation key of the bullet frame in the game interface image through the trained key recognition model, and further comprising:
inputting the obtained feature map into the classification network to obtain the key type corresponding to the operation key at each key position output by the classification network;
determining an operation area corresponding to the operation key of the bullet frame according to the key position output by the regression network, wherein the operation area comprises:
if the regression network outputs a plurality of key positions, selecting a key type with the highest click priority from the key types output by the classification network according to the click priority of the preset key types;
and taking the key position corresponding to the operation key of the key type with the highest click priority as an operation area.
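The selection rule of claim 7 reduces to picking, among the positions returned by the regression head, the key whose predicted type carries the highest preset click priority. The sketch below is hypothetical: the type names (`confirm`, `close`, `cancel`) and their priority values are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch of claim 7: choose the operation area whose key type
# has the highest preset click priority. Type names/values are assumed.
CLICK_PRIORITY = {"confirm": 3, "close": 2, "cancel": 1}

def choose_operation_area(positions, types):
    """positions: list of (x, y, w, h) boxes from the regression network.
    types: predicted key type for each position, from the classification network."""
    best = max(range(len(positions)),
               key=lambda i: CLICK_PRIORITY.get(types[i], 0))
    return positions[best]

area = choose_operation_area([(0, 0, 10, 10), (50, 50, 10, 10)],
                             ["cancel", "confirm"])
```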
8. The method of any one of claims 1-7, wherein determining that a game interface is jammed by a bezel comprises:
acquiring one frame of the game interface image at each set time interval;
comparing the current frame of game interface image with the previous frame of game interface image;
if the similarity between the current frame of game interface image and the previous frame of game interface image reaches a set threshold value, determining that the game interface is blocked by the flick frame.
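The stuck-detection rule of claim 8 can be sketched as: sample one frame per interval and declare the interface blocked by a bullet frame when two consecutive frames are nearly identical. The concrete similarity measure below (mean absolute pixel difference mapped to [0, 1]) is an assumption; the claim only requires the similarity to reach a set threshold.

```python
# Minimal sketch of claim 8's frame-comparison check; the similarity
# measure and the 0.99 threshold are illustrative assumptions.
import numpy as np

def is_blocked(prev_frame, cur_frame, threshold=0.99):
    diff = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    similarity = 1.0 - diff.mean() / 255.0  # 1.0 means identical frames
    return similarity >= threshold
```

The intuition is that a modal bullet frame freezes the interface, so consecutive sampled frames stop changing; in normal gameplay the frames keep differing and the similarity stays below the threshold.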
9. The method according to any one of claims 1 to 7, wherein the training process of the key recognition model comprises:
extracting a training image containing keys from a training sample set, the training image being marked with a key label;
inputting the extracted training image into the key recognition model to obtain a key recognition result of the training image;
determining a loss value according to the key recognition result of the training image and the marked key label;
and adjusting the parameters of the key recognition model according to the loss value until the loss value converges to a preset expected value, so as to obtain the trained key recognition model.
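The training loop of claim 9 follows the standard supervised pattern: forward pass on labeled samples, loss between prediction and label, parameter update until convergence. Purely for illustration, the sketch below uses a tiny logistic-regression model on synthetic data in place of the key recognition network; a real system would train a convolutional network with a framework such as PyTorch, and all data and hyperparameters here are assumptions.

```python
# Illustrative stand-in for claim 9's training loop (NOT the patented model):
# forward pass -> loss -> gradient update, repeated until the loss converges.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                # stand-in for training images
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ w_true > 0).astype(float)          # stand-in for key labels

w = np.zeros(4)
for epoch in range(200):                    # adjust parameters per the loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # recognition result (forward pass)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)           # gradient of the loss value
    w -= 0.5 * grad                         # parameter update step

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```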
10. A game control device, comprising:
the image acquisition unit is used for acquiring a game interface image when determining that the game interface is blocked by the bullet frame;
the image matching unit is used for acquiring a bullet frame template image from the bullet frame template library according to the priority order of the bullet frame template library and matching the acquired bullet frame template image with the game interface image;
an operation area determining unit, configured to determine, in the game interface image, an operation area corresponding to an operation key of the bullet frame according to a target area where the operation key in the matched bullet frame template image is located if the bullet frame template image obtained from the bullet frame template library is matched in the game interface image; if all the bullet frame template images in the bullet frame template library are not matched in the game interface image, determining an operation area corresponding to an operation key of the bullet frame in the game interface image through a trained key identification model; the key identification model comprises a feature extraction network and a regression network; the trained key recognition model is obtained by training a basic model by taking a training image containing keys as input and taking key labels marked in the training image as output;
and the operation execution unit is used for performing a corresponding operation on the bullet frame in the game interface according to the determined operation area.
11. A computer-readable storage medium having a computer program stored therein, characterized in that: the computer program, when executed by a processor, implements the method of any of claims 1-9.
12. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, the computer program, when executed by the processor, implementing the method of any of claims 1-9.
CN202010188645.8A 2020-03-17 2020-03-17 Game control method, game control device, storage medium and electronic equipment Active CN112742026B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010188645.8A CN112742026B (en) 2020-03-17 2020-03-17 Game control method, game control device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112742026A CN112742026A (en) 2021-05-04
CN112742026B 2023-07-28

Family

ID=75645274

Country Status (1)

Country Link
CN (1) CN112742026B (en)




Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (country: HK; legal event code: DE; document number: 40043503)
SE01 Entry into force of request for substantive examination
GR01 Patent grant