AU2021240274A1 - Methods, apparatuses, devices, systems and storage media for detecting game items - Google Patents

Methods, apparatuses, devices, systems and storage media for detecting game items

Info

Publication number
AU2021240274A1
Authority
AU
Australia
Prior art keywords
game
game item
operation object
position information
item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2021240274A
Inventor
Xinxin Wang
Fei Xie
Naiwei XIE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime International Pte Ltd
Original Assignee
Sensetime International Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime International Pte Ltd
Priority claimed from PCT/IB2021/058727 (WO2023037157A1)
Publication of AU2021240274A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 1/00: Card games
    • A63F 1/06: Card games appurtenances
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 9/00: Games not otherwise provided for
    • A63F 9/20: Dominoes or like games; Mah-Jongg games
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07F: COIN-FREED OR LIKE APPARATUS
    • G07F 17/00: Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/32: Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F 17/3202: Hardware aspects of a gaming system, e.g. components, construction, architecture thereof
    • G07F 17/3216: Construction aspects of a gaming system, e.g. housing, seats, ergonomic aspects
    • G07F 17/322: Casino tables, e.g. tables having integrated screens, chip detection means
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07F: COIN-FREED OR LIKE APPARATUS
    • G07F 17/00: Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/32: Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F 17/3225: Data transfer within a gaming system, e.g. data sent between gaming machines and users
    • G07F 17/3232: Data transfer within a gaming system, e.g. data sent between gaming machines and users wherein the operator is informed
    • G07F 17/3237: Data transfer within a gaming system, e.g. data sent between gaming machines and users wherein the operator is informed about the players, e.g. profiling, responsible gaming, strategy/behavior of players, location of players
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07F: COIN-FREED OR LIKE APPARATUS
    • G07F 17/00: Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/32: Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F 17/3241: Security aspects of a gaming system, e.g. detecting cheating, device integrity, surveillance
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07F: COIN-FREED OR LIKE APPARATUS
    • G07F 17/00: Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/32: Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F 17/3286: Type of games
    • G07F 17/3293: Card games, e.g. poker, canasta, black jack
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 9/00: Games not otherwise provided for
    • A63F 9/20: Dominoes or like games; Mah-Jongg games
    • A63F 2009/205: Mah-jongg games
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to the field of computer vision technology, and in particular to a method, an apparatus, a device and a storage medium for detecting a game item. The method of detecting a game item includes: detecting, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area and second position information of an operation object for the game item; and determining, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state. The detection method of the present disclosure can reduce the interference of motion or occlusion with recognition of the game item, which is beneficial to improving the accuracy of subsequent recognition task(s).

Description

METHODS, APPARATUSES, DEVICES, SYSTEMS AND STORAGE MEDIA FOR DETECTING GAME ITEMS

CROSS-REFERENCE TO RELATED APPLICATION

The present disclosure claims priority to Singapore Patent Application No. 10202110070U, filed on September 13, 2021, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD
[01] The present disclosure relates to the field of computer vision technology, and in particular to a method, an apparatus, a device, a system and a storage medium for detecting a game item.

BACKGROUND
[02] Currently, scene action detection based on computer vision is widely used in various scenarios. In scenarios such as tabletop games, a vision detection system is often used to detect and recognize information about game props.

SUMMARY
[03] In a first aspect, an embodiment of the present disclosure provides a method of detecting a game item, which includes detecting, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area and second position information of an operation object for the game item; and determining, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state.
[04] In some embodiments, detecting the first position information of the game item within the game area and the second position information of the operation object for the game item includes: obtaining a first boundary box of the game item within the game area and a second boundary box of the operation object by performing a detection on the video stream.
[05] In some embodiments, the method provided in the present disclosure further includes at least one of determining, in response to that there is no overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is separating from each other; or determining, in response to that there is an overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is overlapping with each other.
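The overlap test described above can be sketched with axis-aligned bounding boxes. This is a minimal illustration, not the patent's implementation; the `(x1, y1, x2, y2)` corner format and the function names are assumptions.

```python
def boxes_overlap(box_a, box_b):
    """Return True if two axis-aligned boxes (x1, y1, x2, y2) share any area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = min(ax2, bx2) - max(ax1, bx1)  # width of the intersection, if any
    inter_h = min(ay2, by2) - max(ay1, by1)  # height of the intersection, if any
    return inter_w > 0 and inter_h > 0

def positional_relationship(item_box, object_box):
    """'overlapping' if the first boundary box (game item) and the second boundary
    box (operation object) intersect, else 'separating'."""
    return "overlapping" if boxes_overlap(item_box, object_box) else "separating"
```

With this predicate, the per-frame relationship between the game item and the operation object reduces to a single intersection check on the two detected boundary boxes.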
[06] In some embodiments, determining, in response to determining that the positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in the operation completed state includes: for each game item involved in the video stream of the game area, determining that the game item is in the operation completed state at a time corresponding to a first video frame in the video stream, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream.
[07] In some embodiments, determining that the game item is in the operation completed state at the time corresponding to the first video frame in the video stream, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in the preceding frame of the first video frame in the video stream includes: obtaining third position information of the game item in the preceding frame, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in the preceding frame of the first video frame in the video stream; and determining, in response to that the first position information and the third position information meet a predetermined position condition, that the game item is in the operation completed state at the time corresponding to the first video frame.
[08] In some embodiments, detecting the first position information of the game item within the game area and the second position information of the operation object for the game item includes: determining, based on the video stream, whether the operation object for the game item is a target operation object; and detecting, in response to that the operation object for the game item is the target operation object, the first position information of the game item and the second position information of the target operation object.
[09] In some embodiments, determining, based on the video stream, whether the operation object for the game item is the target operation object includes: determining, based on the video stream, a direction of operating the game item by the operation object; and determining, in response to that the direction of operating is a predetermined direction, that the operation object is the target operation object.
[10] In some embodiments, determining, based on the video stream, whether the operation object for the game item is the target operation object includes: detecting, based on the video stream, an operation object correlated with the game item; detecting, based on the video stream, a face object correlated with the operation object; and determining, in response to that the face object is a predetermined face object, that the operation object for the game item is the target operation object.
[11] In some embodiments, the method provided in the present disclosure further includes: obtaining, by performing a detection on each video frame in the video stream sequentially, a current detection result of each game item that is in the operation completed state in the video frame; updating a historical detection result stored in a cache based on a comparison between the current detection result and the historical detection result in the cache; and switching, in a case that one or more detection results in the cache meet a predetermined condition, the game to a result processing state.
[12] In some embodiments, the detection result includes position information of a game item, updating the historical detection result in the cache based on the comparison between the current detection result and the historical detection result in the cache includes: determining, according to a comparison between position information of the game item in the current detection result and each position information in the historical detection result, whether the game item is a newly-appearing game item; and storing, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache.
[13] In some embodiments, updating the historical detection result in the cache based on the comparison between the current detection result and the historical detection result stored in the cache further includes: storing, in response to that there is no historical detection result in the cache, the current detection result into the cache and switching the current game to a game item distributing state.
[14] In some embodiments, storing, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache includes: storing, in response to that the game item is a newly-appearing game item and the current game is in the game item distributing state, the detection result of the newly-appearing game item into the cache.
[15] In some embodiments, the detection result includes identification information of a game item; the determining that the one or more detection results in the cache meet the predetermined condition includes: obtaining, for each of the one or more detection results in the cache, identification information of a game item involved in the detection result in the cache; determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results; and determining, in response to that a game processing result can be derived from the identification information obtained for each of the one or more detection results, that the one or more detection results meet the predetermined condition.
[16] In some embodiments, before determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results, the method further includes: executing, in response to that a number of the one or more detection results stored in the cache reaches a predetermined number, the step of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results.
[17] In some embodiments, the detection result includes position information of a game item, the method further includes: acquiring sequential information of the cache storing each detection result in the cache; determining a target sub-area corresponding to the detection result based on the sequential information and a pre-established correspondence between sequential information of game items and a plurality of sub-areas within the game area; and switching the current game to a halt state, in response to determining that the game item is not in the target sub-area according to the position information of the game item.
[18] In some embodiments, the game area includes a game item operating area, detecting the first position information of the game item within the game area and the second position information of the operation object for the game item includes: detecting, in response to that the game item is moved to the game item operating area, the first position information of the game item and the second position information of the operation object for the game item.
[19] In a second aspect of the present disclosure, an apparatus for detecting a game item is provided which includes: a first detecting module, configured to detect, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area, and second position information of an operation object for the game item; and a first determining module, configured to determine, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state.
[20] In some embodiments, the first detecting module is specifically configured to obtain a first boundary box of the game item within the game area and a second boundary box of the operation object by performing a detection on the video stream.
[21] In some embodiments, the first determining module is specifically configured to: determine, in response to that there is no overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is separating from each other; and/or determine, in response to that there is an overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is overlapping with each other.
[22] In some embodiments, the first determining module is specifically configured to: for each game item involved in the video stream of the game area, determine that the game item is in the operation completed state at a time corresponding to a first video frame in the video stream, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream.
[23] In some embodiments, the first determining module is specifically configured to obtain third position information of the game item in the preceding frame, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream; and determine, in response to that the first position information and the third position information meet a predetermined position condition, that the game item is in the operation completed state at the time corresponding to the first video frame.
[24] In some embodiments, the first detecting module is specifically configured to determine whether the operation object for the game item is a target operation object based on the video stream; and detect, in response to that the operation object for the game item is the target operation object, the first position information of the game item and the second position information of the target operation object.
[25] In some embodiments, the first detecting module is specifically configured to determine, based on the video stream, a direction of operating the game item by the operation object; and determine, in response to that the direction of operating is a predetermined direction, that the operation object is the target operation object.
[26] In some embodiments, the first detecting module is specifically configured to detect an operation object correlated with the game item based on the video stream; detect a face object correlated with the operation object based on the video stream; and determine, in response to that the face object is a predetermined face object, that the operation object for the game item is the target operation object.
[27] In some embodiments, the detection apparatus provided in the present disclosure further includes a second detecting module, which is configured to obtain, by performing a detection on each video frame in the video stream sequentially, a current detection result of each game item that is in the operation completed state in the video frame; a cache updating module, which is configured to update a historical detection result stored in a cache based on a comparison between the current detection result and the historical detection result in the cache; and a game state switching module, which is configured to switch the game to a result processing state in a case that one or more detection results in the cache meet a predetermined condition.
[28] In some embodiments, the detection result includes position information of a game item, and the cache updating module is specifically configured to determine, according to a comparison between position information of the game item in the current detection result and each position information in the historical detection result, whether the game item is a newly-appearing game item; and store, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache.
[29] In some embodiments, the cache updating module is specifically configured to store, in response to that there is no historical detection result in the cache, the current detection result into the cache and switch the current game to a game item distributing state.
[30] In some embodiments, the cache updating module is specifically configured to store, in response to that the game item is a newly-appearing game item and the current game is in the game item distributing state, the detection result of the newly-appearing game item into the cache.
[31] In some embodiments, the detection result includes identification information of a game item, and the game state switching module is specifically configured to obtain, for each of the one or more detection results in the cache, identification information of a game item involved in the detection result in the cache; determine whether a game processing result can be derived from the identification information obtained for each of the one or more detection results; and determine, in response to that a game processing result can be derived from the identification information obtained for each of the one or more detection results, that the one or more detection results meet the predetermined condition.
[32] In some embodiments, the game state switching module is specifically configured to, before determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results, execute, in response to that a number of the one or more detection results stored in the cache reaches a predetermined number, the step of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results.
[33] In some embodiments, the detection result includes position information of a game item, and the detection apparatus provided in the present disclosure further includes an acquiring module, configured to acquire sequential information of the cache storing each detection result; a second determining module, configured to determine a target sub-area corresponding to the detection result based on the sequential information and a pre-established correspondence between sequential information of game items and a plurality of sub-areas within the game area; and the game state switching module is configured to switch the current game to a halt state, in response to determining that the game item is not in the target sub-area according to the position information of the game item.
[34] In some embodiments, the game area includes a game item operating area, and the first detecting module is specifically configured to detect, in response to that the game item is moved to the game item operating area, the first position information of the game item and the second position information of the operation object for the game item.
[35] In a third aspect of the present disclosure, there is provided a device for detecting a game item. The device includes a processor and a memory storing computer instructions that can be read by the processor; the processor, when reading the computer instructions, is caused to execute the method according to any embodiment of the first aspect.
[36] In a fourth aspect of the present disclosure, there is provided a system for detecting a game item. The system includes an image capture device, configured to acquire a video stream of a game playing in a game area; a processor, connected with the image capture device to obtain the video stream of the game; and a memory storing computer instructions that can be read by the processor. The processor, when reading the computer instructions, is caused to execute the method according to any embodiment of the first aspect.
[37] In a fifth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium, which stores computer readable instructions for causing a computer to execute the method according to any embodiment of the first aspect.
[38] According to the method of detecting a game item provided by the present disclosure, first position information of a game item within a game area and second position information of an operation object for the game item are detected based on an acquired video stream of a game playing in the game area, and in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, the game item is determined to be in an operation completed state. By detecting game scenes based on computer vision and thereby detecting a stable state of a game item, the method of the present disclosure can effectively reduce the interference of motion or occlusion with recognition of the game item, which is beneficial to improving the accuracy of subsequent recognition task(s).

BRIEF DESCRIPTION OF THE DRAWINGS
[39] In order to more clearly illustrate the specific embodiments of the present disclosure or the technical solutions in the prior art, the accompanying drawings referred to in the specific embodiments or the description of the prior art will be briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative endeavor.
[40] FIG. 1 is a schematic structure diagram illustrating a system for detecting a game item according to some embodiments of the present disclosure.
[41] FIG. 2 is a schematic diagram illustrating a game area according to some embodiments of the present disclosure.
[42] FIG. 3 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[43] FIG. 4 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[44] FIG. 5 is a schematic diagram illustrating an image in a video stream of a game area according to some embodiments of the present disclosure.
[45] FIG. 6 is a schematic diagram illustrating an image in a video stream of a game area according to some embodiments of the present disclosure.
[46] FIG. 7 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[47] FIG. 8 is a schematic diagram illustrating an image in a video stream of a game area according to some embodiments of the present disclosure.
[48] FIG. 9 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[49] FIG. 10 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[50] FIG. 11 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[51] FIG. 12 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[52] FIG. 13 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[53] FIG. 14 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[54] FIG. 15 is a flow chart illustrating a method of detecting a game item according to some embodiments of the present disclosure.
[55] FIG. 16 is a block diagram illustrating an apparatus for detecting a game item according to some embodiments of the present disclosure.
[56] FIG. 17 is a block diagram illustrating an apparatus for detecting a game item according to some embodiments of the present disclosure.
[57] FIG. 18 is a structural block diagram illustrating an apparatus for detecting a game item according to some embodiments of the present disclosure.
[58] FIG. 19 is a structural block diagram illustrating a system that is applicable for implementing the detection method of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS
[59] The technical solutions of the present disclosure will be clearly and completely described below in conjunction with the accompanying drawings. Apparently, the described implementations are part of the implementations of the present disclosure, rather than all of the implementations. Based on the implementations in the present disclosure, all other implementations obtained by those of ordinary skill in the art without creative endeavor shall fall within the protection scope of the present disclosure. In addition, the technical features involved in the different embodiments of the present disclosure described below can be combined with each other as long as they do not conflict with each other.
[60] A complete game can often be divided into a plurality of different phases according to different player actions. For example, for a tabletop card game, the game process can include players placing game coins, a dealer dealing cards, collecting the game coins according to card results, and so on. Each game phase often has certain game rules; for example, in the card dealing phase, there is generally a requirement on the order in which cards are dealt.
[61] A detection system based on computer vision can divide the entire game process into a plurality of different game states according to the different game phases. For example, the card dealing phase can correspond to a "game item distributing state", and the game coin collecting phase can correspond to a "result processing state". At the logic level, the detection system can set corresponding detection logic for different game phases, so as to determine whether a player or a dealer is performing a game action according to the proper game rule for the current state.
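The state-per-phase idea above can be sketched as a small state machine. The state names echo the states named in this disclosure, but the event names and the transition table are illustrative assumptions, not the patent's terms.

```python
from enum import Enum, auto

class GameState(Enum):
    IDLE = auto()
    ITEM_DISTRIBUTING = auto()   # corresponds to the card dealing phase
    RESULT_PROCESSING = auto()   # corresponds to the coin collecting phase
    HALT = auto()                # entered when a game rule violation is detected

def next_state(state, event):
    """Minimal transition table keyed on (state, event); unknown events keep
    the current state."""
    transitions = {
        (GameState.IDLE, "first_item_detected"): GameState.ITEM_DISTRIBUTING,
        (GameState.ITEM_DISTRIBUTING, "result_derivable"): GameState.RESULT_PROCESSING,
        (GameState.ITEM_DISTRIBUTING, "rule_violation"): GameState.HALT,
    }
    return transitions.get((state, event), state)
```

Per-state detection logic then amounts to dispatching on the current `GameState` before evaluating the rules that apply to that phase.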
[62] In this scenario, the detection system's determination of one or more player actions and the switching of game states rely on information such as the coordinates of a player's hand, the number of cards, the position of a card, the point number of a card, and the like on the game table. However, in a video stream of a real game scenario acquired by a camera, taking the card dealing phase as an example, a card can be in a high-speed moving state, and the dealer's hand and the card can be occluded. When the detection system uses these images involving high-speed motion and occlusion for detection and recognition, the accuracy of the recognition result is relatively low, which in turn results in lower accuracy of subsequent business logic judgment. For example, when the switching of game states is determined based on an image involving a fast-moving and occluded card, the accuracy of recognizing the game state is relatively low.
[63] Based on the above, the implementations of the present disclosure provide a method, an apparatus, an electronic device and a storage medium for detecting a game item, with an objective of recognizing a stable state of a game item based on visual information during the game, which is beneficial to improving the accuracy of subsequent business processing.
[64] In some implementations, examples of the present disclosure provide a method of detecting a game item, which can be applied to a game detection system to realize visual detection on game scenarios. FIG. 1 shows some implementations of the detection system of the present disclosure, which will be described below in conjunction with FIG. 1.
[65] As shown in FIG. 1, in some exemplary scenarios, the game of the present disclosure is described taking a tabletop game as an example. In this scenario, at least one game table 100 can be provided. In the implementations of the present disclosure, the number of game tables 100 is not limited; for example, two game tables 100 are shown in FIG. 1. It can be understood that the game table 100 indicates a carrier for players to play games, and the table top of the game table 100 indicates a game area where the players play games.
[66] The detection system includes an image capture device 200 arranged above or on the sides of the game table 100, and the image capture device 200 is configured to obtain video stream data of the game area.
[67] In some implementations, the image capture device 200 includes at least one camera located directly above the game table 100, so that the video stream of the game area can be collected in a top view. In some implementations, in addition to the camera directly above, the image capture device 200 can also include at least one camera located on the sides of the game table 100, so that video stream data of the game from other angles can be collected. This can be readily understood by those skilled in the art and will not be detailed herein.
[68] Still referring to FIG. 1, in an example of the present disclosure, the detection system further includes a vision unit 300 provided on each game table 100. The vision unit 300 can perform corresponding image processing operations, such as image detection and the like, based on the video stream data collected by the image capture device 200. The vision unit 300 can be a unit module that is pre-trained based on a learnable machine learning model or a neural network model and can be used to identify and detect a target object in image data. The vision unit 300 can be an end-side processing device deployed on the side of the game table.
[69] The detection system also includes a processing unit 400, which is respectively connected to the vision units 300 of a plurality of game tables 100, so that the processing unit 400 can obtain a processing result of each of the vision units 300, and perform corresponding steps according to the received processing results, for example, issuing control instructions to the vision unit 300 and so on. The processing unit may be deployed in a central control device of a game place, or in a remote server for performing data analysis for a plurality of game tables in the game place.
[70] In an exemplary scenario, when a vision unit 300 of a game table 100 detects that a player is not playing the game according to the rule of the game based on the video stream data collected by the image capture device 200, the vision unit 300 can send warning information to the processing unit 400. At the same time, the vision unit 300 can switch the current game to the halt state. After receiving the warning information, the processing unit 400 in the background can issue a warning to a dealer to give a risk alarm.
[71] In the detection system exemplified in the present disclosure, a vision unit 300 is correspondingly provided for each game table, thereby facilitating the distributed management of the detection system.
[72] However, it can be understood that the detection system of the implementation of the present disclosure is not limited to the example in FIG. 1, and other implementations are also possible based on the example in FIG. 1. For example, the vision unit 300 and the processing unit 400 can be integrated in the same processing module, that is, the vision unit 300 is not set on the game table 100 at the front end, but is integrated with the processing unit 400 in the background, and performs image data processing operations in the background. It surely can be understood by those skilled in the art, and will not be detailed herein.
[73] FIG. 2 shows a schematic diagram illustrating a game area of a game table in some implementations of the present disclosure, which will be described below in conjunction with FIG. 2.
[74] As shown in FIG. 2, the entire tabletop area of the game table 100 indicates the game area, and the video stream for the game table during the game can be captured by an image capture device set above the game table 100. In the example in FIG. 2, the game area can be divided into a game item operating area 110 and a currency token operating area 120. Taking a tabletop game as an example, the game item operating area 110 can be used for distributing and placing a game item, and the currency token operating area 120 can be used for a player to place a currency token.
[75] Further, the game item operating area 110 may include a plurality of sub-areas. For example, in an exemplary game scenario, each player is to be distributed one or more game items and is accordingly provided with a sub-area at a position close to that player. For another example, the game item operating area 110 may include two sub-areas, namely a sub-area 111 and a sub-area 112 shown in FIG. 2. It can be understood by those skilled in the art that the number of sub-areas can be set according to a specific game scenario, which will not be detailed herein.
[76] Based on the architecture of the detection system illustrated in FIG. 1 and FIG. 2, the method of detecting a game item of the implementation of the present disclosure will be described below. The method exemplified in the present disclosure can be executed by the above-mentioned vision unit 300, can also be executed by the processing unit 400, or jointly executed by the vision unit 300 and the processing unit 400, which is not limited in the present disclosure.
[77] As shown in FIG. 3, in some implementations, the method of detecting a game item of the example of the present disclosure includes the following steps.
[78] At step 310, first position information of a game item within a game area, and second position information of an operation object for the game item are detected based on an acquired video stream of a game playing in the game area.
[79] Specifically, during the game, video stream data of the game process can be acquired with the image capture device set in the game area, and transmitted to the vision unit 300 described above, or to the vision unit 300 and the processing unit 400 described above. When the above-mentioned video stream is obtained, the vision unit 300, or the vision unit 300 and the processing unit 400, can perform analysis on the video stream to recognize information of an object on the game table and/or information of an event that occurs on the game table.
[80] A game item can be considered as a game prop, which is necessary for the game to proceed. At the same time, each game item is further provided with respective identification information which represents a prop attribute of the game item. To distinguish between game props of the same kind, a plurality of different kinds of attribute information can be provided for game items of one kind. The identification information described in the present disclosure refers to the attribute information of the game item.
[81] In an example, taking a tabletop card game as an example of a game scenario, the game item can be a card, and the suit and point number on the card are the identification information of the card, such as the ace of hearts, the 2 of spades, and so on.
[82] In another example, taking a tabletop mahjong game as an example of a game scenario, the game item can be a mahjong block, and card face information of the mahjong block is the identification information, such as one character, nine bamboo, and so on.
[83] It surely can be understood by those skilled in the art that the implementations of the present disclosure are not limited to the above examples, and can also be applied to any other game scenarios suitable for implementation, which is not limited in the present disclosure.
[84] The operation object for the game item refers to a target object for operating the game item. For example, for a card game, in general, a dealer is to deal cards and, when dealing cards, his/her hand is the operation object. For another example, a dealer can hold a tool for dealing cards, and, when dealing cards, the tool he/she holds is the operation object.
[85] It should be noted that in an example scenario, taking the game item distributing phase as an example, a game item is taken out from a game item depository and distributed to the game item operating area 110 by a dealer. On the tabletop, the game item is driven by the dealer's hand to move from the game item depository to the game item operating area 110; then the dealer's hand leaves the game item, and the game item stays still in the game item operating area. During this process, for a game item that is moving at a high speed and is occluded by the hand, if images captured while the game item is moving are used to detect the identification information of the game item, the accuracy of the detection result is relatively low. When the game item is finally motionless in the game item operating area 110, that is, neither moving nor occluded, and an image captured at this time is used for detecting and recognizing the identification information, the accuracy of recognizing the identification information is improved.
[86] Therefore, in the implementations of the present disclosure, a state in which the game item is motionless in the game area after being operated by the operation object is defined as an operation completed state. In the implementations of the present disclosure, by determining whether the game item is in the operation completed state, and by using an image corresponding to a stable state of the game item to subsequently perform detection and recognition of the identification information as well as business logic judgment, the accuracy of the detection system can be improved.
[87] In step 310, based on the video stream of the game process, for example, with image detection technology, the first position information of the game item within the game area and the second position information of the operation object for the game item are obtained.
[88] In an exemplary game scenario, still taking the game item distributing phase as an example, while a game item is being distributed, the hand of the dealer who distributes the game item is the operation object. By performing image detection on one or more images in the video stream, the first position information of the game item in each image can be obtained, and the second position information of the dealer's hand can be obtained at the same time.
[89] The first position information indicates a position of the game item in an image of the video stream, and the second position information indicates a position of the operation object for the game item in an image of the video stream. In some implementations, the first position information may be indicated as a boundary box of the game item, and the second position information may be indicated as a boundary box of the operation object. In other implementations, the first position information may indicate position information of a key point of the game item, and the second position information may indicate position information of a key point of the operation object.
[90] It can be understood that the first position information and the second position information can also be any other position information suitable to represent a position, which is not limited to the above examples and will not be detailed herein.
[91] At step 320, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, it is determined that the game item is in an operation completed state.
[92] It can be seen from the above that, in the exemplary distributing game items phase, the game item being in the operation completed state indicates that the operation object such as the hand of a dealer or the handheld tool has left from the game item, i.e., the operation object no longer overlaps with the game item. Therefore, based on the first position information of the game item and the second position information of the operation object, it can be determined whether the positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other, and further determined whether the game item is in the operation completed state.
[93] In some implementations, based on the detection of the game item and the operation object performed on the video stream, a first boundary box of the game item within the game area and a second boundary box of the operation object for the game item can be obtained. The first boundary box and the second boundary box are used as the above-mentioned first position information and second position information respectively. The first boundary box and the second boundary box may be rectangular boxes containing vertices, length information and width information of the boundary boxes.
[94] The positional relationship of overlapping or separating between the game item and its operation object may be determined from the overlapping and separating state of the corresponding first boundary box and second boundary box. When the first boundary box and the second boundary box at least partially overlap, it indicates that the game item and its operation object overlap, i.e., the game item is being distributed and is not in the operation completed state. When the first boundary box and the second boundary box do not overlap, it indicates that the operation object has left the game item, i.e., the game item has been distributed and is in the operation completed state. In this regard, the present disclosure will be described in detail in the following implementation of FIG. 4, and will not be detailed herein.
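The boundary box overlap test described above can be sketched as follows. This is a minimal illustration assuming axis-aligned rectangular boxes; the `Box` type and function names are illustrative, not part of the disclosed system:

```python
from typing import NamedTuple

class Box(NamedTuple):
    """Axis-aligned boundary box: top-left corner plus width and height."""
    x: float
    y: float
    w: float
    h: float

def boxes_overlap(a: Box, b: Box) -> bool:
    """True if the two boundary boxes share at least a partial overlap area."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def is_operation_completed(item_box: Box, operator_box: Box) -> bool:
    """The game item is treated as operation-completed once the operation
    object's box no longer overlaps the item's box."""
    return not boxes_overlap(item_box, operator_box)
```

In practice the boxes would come from a detector run on each video frame; the overlap test itself is constant-time per item/operator pair.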
[95] In some other implementations, based on the detection of the game item and the operation object performed on the video stream, coordinates of a first key point of the game item within the game area and coordinates of a second key point of the operation object for the game item can be obtained. The coordinates of the first key point and the second key point are used as the first position information and the second position information described above, respectively. The first key point can include one or more key points of the game item, such as at least one of a center point or a corner point of the game item. The second key point can include one or more key points of the operation object, such as at least one of a center point or an edge contour point of the operation object.
[96] The positional relationship of overlapping or separating between the game item and its operation object may be determined from the distance between the first key point and the corresponding second key point. For example, a distance threshold can be predetermined based on a priori knowledge or limited trials; when the distance between the first key point and the second key point is less than the distance threshold, it represents that the game item and its operation object are overlapping with each other. Conversely, when the distance between the first key point and the second key point is equal to or greater than the distance threshold, it represents that the game item and its operation object are separating from each other.
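A key-point variant of the same separation test might look like the following sketch, where the key points are 2D image coordinates and the distance threshold is a hypothetical tuning parameter:

```python
import math

def points_separated(first_kp: tuple, second_kp: tuple,
                     dist_threshold: float) -> bool:
    """Return True if the game item's key point and the operation object's
    key point are at least `dist_threshold` apart (i.e. separated).

    `first_kp` / `second_kp` are (x, y) pixel coordinates; the threshold
    would be chosen empirically, e.g. from limited trials as in the text."""
    dx = first_kp[0] - second_kp[0]
    dy = first_kp[1] - second_kp[1]
    return math.hypot(dx, dy) >= dist_threshold
```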
[97] It can be understood that the first position information and the second position information can also be any other position information suitable to represent a position and, accordingly, those skilled in the art can determine the positional relationship between the game item and the operation object based on a specific implementation of the position information, which will not be detailed herein.
[98] A first image in which the positional relationship between the game item and its operation object changes from overlapping to separating can be detected from the video stream and taken as a critical frame. An image prior to the critical frame represents that the game item is in a moving or occluded state, and an image after the critical frame represents that the game item is in the operation completed state. Therefore, the image after the critical frame can be used to perform relevant detection and recognition on the game item, thus improving the accuracy of detection. In this regard, detailed description will be given in the following implementations of the present disclosure and will not be detailed herein.
[99] It can be seen from the above that, in the detection method provided in the implementations of the present disclosure, by realizing detection on game scenes based on computer vision and thereby detecting a stable state of a game item, the method of the present disclosure can effectively reduce the interference of motion or occlusion on the recognition of the game item, which is beneficial to improving the accuracy of subsequent recognition task(s).
[100] Some specific implementations of the detection method provided in the present disclosure are illustrated in FIG. 4 and FIG. 6. The detection method of the present disclosure is described in detail below in conjunction with FIG. 4 and FIG. 6.
[101] As shown in FIG. 4, in some implementations, the method of detecting a game item provided in examples of the present disclosure includes the following steps.
[102] At step 410, a first boundary box of the game item within the game area and a second boundary box of the operation object are obtained by performing a detection on the video stream.
[103] At step 420, in response to that there is no overlapping area between the first boundary box and the second boundary box, it is determined that the positional relationship between the game item and the operation object is separating from each other.
[104] At step 430, in response to that there is an overlapping area between the first boundary box and the second boundary box, it is determined that the positional relationship between the game item and the operation object is overlapping with each other.
[105] Specifically, taking the game item distributing scenario as an example, FIG. 5 and FIG. 6 respectively show one image included in the video stream, and specific description of which will be given below in conjunction with FIG. 5 and FIG. 6.
[106] FIG. 5 and FIG. 6 are top views of a game table, and a game area of the game table can refer to FIG. 2. A dealer 700 performs an operation of distributing game items in the game area, that is, taking out a game item 710 from a game item depository box 720 and moving the game item 710 to the game item operating area 110 for stationary placement. By performing image detection on the video stream, a first boundary box 711 of the game item 710 on the game table and a second boundary box 731 of the operation object 730 for the game item can be obtained. In this example, the operation object 730 for the game item is the hand of the dealer 700.
[107] In FIG. 5, the game item 710 is in the process of moving, that is, the operation object 730 is driving the game item 710 to move on the game table, so there is an overlapping area between the first boundary box 711 and the second boundary box 731. At this time, the positional relationship between the game item 710 and the operation object 730 is overlapping with each other.
[108] While in FIG. 6, there is no overlapping area between the first boundary box 711 of the game item 710 and the second boundary box 731 of the operation object 730, indicating that the operation object 730 has left the game item 710 and the game item 710 has been distributed, at this time the positional relationship between the game item 710 and the operation object 730 is separating from each other.
[109] In the implementations of the present disclosure, for each game item involved in the video stream of the game area, whether the game item is in the operation completed state can be determined through the positional relationship between the game item and the operation object.
[110] Specifically, for each game item, in response to that the game item and the operation object are detected as separating from each other in a first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame, it is determined that the game item is in the operation completed state at the time corresponding to the first video frame.
[111] It can be understood that the images included in the video stream are a plurality of images in a temporal sequence, which represent dynamic changes in the positional relationship between the game item and the operation object. The detection system performs detection on each image in turn based on the above-mentioned detection method to obtain the positional relationship between the game item and the operation object in the corresponding image. In the implementations of the present disclosure, an image in which the positional relationship between the game item and the operation object is separating from each other is defined as the first video frame.
[112] For example, in the image shown in FIG. 6, the game item and the operation object are detected as separating from each other by the detection system, and the image shown in FIG. 6 can be determined as a first video frame. After determining the first video frame, the positional relationship between the game item and the operation object in a preceding frame of the image can be obtained by the detection system. For example, a preceding frame of the first video frame is shown in FIG. 8, and in the preceding frame shown in FIG. 8, the game item and the operation object are overlapping with each other.
[113] In this case, it represents that the positional relationship between the game item and the operation object changes from overlapping with each other shown in FIG. 8 to separating from each other shown in FIG. 6. At this time, the time corresponding to the image shown in FIG. 6 can be determined as the time when the game item is in the operation completed state.
[114] It should be noted that the preceding frame of the first video frame may be a previous image immediately before the first video frame in the video stream, or may be an image before the first video frame and apart from the first video frame by at least one frame in the video stream, which is not limited in the present disclosure.
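Applied to a sequence of per-frame overlap results, the critical-frame logic above can be sketched as follows. This sketch uses the immediately preceding frame; the function name and the boolean representation of per-frame detections are assumptions for illustration:

```python
from typing import Optional, Sequence

def find_critical_frame(overlap_flags: Sequence[bool]) -> Optional[int]:
    """Given per-frame booleans (True = the game item and the operation
    object overlap in that frame), return the index of the first video
    frame where the pair is separated while it still overlapped in the
    preceding frame, or None if no such transition occurs."""
    for i in range(1, len(overlap_flags)):
        if overlap_flags[i - 1] and not overlap_flags[i]:
            return i  # operation completed at this frame's timestamp
    return None
```

Frames at or after the returned index would then be the ones used for identification recognition, since the item is no longer moving or occluded.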
[115] It can be seen from the above that in the detection method provided in the implementations of the present disclosure, an overlapping state between the boundary boxes of the operation object and the game item is used for determining whether the game item is in the operation completed state, which simplifies the calculation process and improves the efficiency of the detection system.
[116] In some implementations, in order to further improve the accuracy of detecting a stable state of a game item, the position information in the preceding frame of the first video frame can be combined to determine whether the game item is in the operation completed state, which will be described in detail below in conjunction with FIG. 7.
[117] As shown in FIG. 7, in some implementations, the method of detecting a game item provided in the examples of the present disclosure includes the following steps.
[118] At step 701, in response to detecting that the game item and the operation object are separating from each other in the first video frame in the video stream, and are overlapping with each other in the preceding frame of the first video frame in the video stream, third position information of the game item in the preceding frame is obtained.
[119] At step 702, in response to that the first position information and the third position information meet a predetermined position condition, it is determined that the game item is in the operation completed state at the time corresponding to the first video frame.
[120] In some implementations of the present disclosure, in a case of detecting that the positional relationship between the game item and the operation object changes from overlapping with each other in the preceding frame to separating from each other in the first video frame, the third position information of the game item in the preceding frame is further obtained, rather than directly determining the time corresponding to the first video frame as the time when the game item is in the operation completed state.
[121] The third position information indicates a position of the game item in the preceding frame. Similar to the above-mentioned first position information, the detection system can obtain the third position information of the game item in the preceding frame by performing a detection on the preceding frame. In some implementations, the third position information may indicate a boundary box of the game item, or position information of a key point of the game item.
[122] Taking the above-mentioned game item distributing scenario as an example, the first video frame is shown in FIG. 6 and the preceding frame of the first video frame is shown in FIG. 8. In the first video frame shown in FIG. 6, the detection system determines that the game item and the operation object are separating from each other. At the same time, in the preceding frame shown in FIG. 8, the detection system determines that the game item and the operation object are overlapping with each other. At this time, the detection system can obtain the third position information of the game item from the detection on the preceding frame shown in FIG. 8.
[123] As shown in FIG. 8, in the preceding frame, the third position information of the game item 710 obtained from the detection indicates a third boundary box 711'. In the implementations of the present disclosure, it is considered that, when the game item in the first video frame is in the operation completed state, the position of the game item in the preceding frame should be almost the same as its position in the first video frame. Therefore, after the third boundary box 711' is obtained, whether the game item 710 in the first video frame is in the operation completed state can be determined based on the positions of the first boundary box 711 and the third boundary box 711'.
[124] The first boundary box 711 represents the position information of the game item 710 in the first video frame (FIG. 6), and the third boundary box 711' represents the position information of the game item 710 in the preceding frame (FIG. 8).
[125] In an example, if the positions of the first boundary box 711 and the third boundary box 711' are the same, it indicates that the position of the game item 710 has not changed in the two images, thus determining that the first position information and the third position information meet the predetermined position condition, and the game item is in the operation completed state at the time corresponding to the first video frame shown in FIG. 6.
[126] In another example, if a distance between the first boundary box 711 and the third boundary box 711' is less than a predetermined distance threshold, it indicates that the position of the game item 710 remains almost unchanged in the two images, thus determining that the first position information and the third position information meet the predetermined position condition, and the game item 710 in the first video frame is in the operation completed state. If the distance between the first boundary box 711 and the third boundary box 711' is greater than the predetermined distance threshold, it indicates that the position of the game item 710 has changed significantly, and the game item 710 in the first video frame is not in the operation completed state.
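One possible form of this predetermined position condition, comparing the centers of the first and third boundary boxes against a distance threshold, is sketched below. The `(x, y, w, h)` box representation and the use of center distance (rather than, say, IoU) are illustrative assumptions:

```python
import math

def box_center(box: tuple) -> tuple:
    """Center of an (x, y, w, h) axis-aligned boundary box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def position_condition_met(first_box: tuple, third_box: tuple,
                           dist_threshold: float) -> bool:
    """True when the game item's boundary box in the first video frame
    and in the preceding frame are at (almost) the same position, i.e.
    the item stayed still and can be treated as operation-completed."""
    cx1, cy1 = box_center(first_box)
    cx3, cy3 = box_center(third_box)
    return math.hypot(cx1 - cx3, cy1 - cy3) < dist_threshold
```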
[127] It can be seen from the above that in the implementations of the present disclosure, the position information of the game item in the preceding frame is further combined to determine whether the game item is in a stable state, so as to improve the accuracy and reliability of detecting the stable state of the game item.
[128] In some implementations, in a real game scenario, game participants often include a dealer and a player, and operations on a game item can only be performed by the dealer. For example, in an example game scenario, an operation of distributing game items can only be performed by the dealer, and a player can only receive one or more game items distributed to them by the dealer. In such a scenario, in order to determine whether a violation action occurs in the distributing process, it is necessary for the detection system to perform a detection on the operation object currently operating the game item. A specific description will be given below in conjunction with FIG. 9.
[129] As shown in FIG. 9, in some implementations, the method of detecting a game item provided in the examples of the present disclosure includes the following steps.
[130] At step 910, whether the operation object for the game item is a target operation object is determined based on the video stream.
[131] Specifically, at an initial stage of detection, it is necessary to first determine whether the operation object currently operating the game item is the target operation object. The target operation object is an object that is allowed to perform one or more operations on the game item, such as a hand-held prop or a hand of a dealer who deals cards in a card game or the like.
[132] As shown in FIG. 10, in some embodiments, the step 910 can include the following steps.
[133] At step 911-1, a direction of operating the game item by the operation object is determined based on the video stream.
[134] At step 911-2, in response to that the direction of operating is a predetermined direction, it is determined that the operation object is the target operation object.
[135] Specifically, as shown in FIG. 5, in an example game scenario, a dealer and a player sit face to face on two sides of the game table, that is, for an image capture device set above the table, the dealer operates a game item in a different direction from that of the player.
[136] Therefore, the direction of operating the game item by the current operation object can be obtained by performing image detection on the video stream, where the direction of operating can represent a moving direction or angle of the game item. The predetermined direction is a predetermined moving direction or angle range for the game item. In a case that the detected direction of operating matches the predetermined direction, it represents that the hand currently operating the game item is the hand of the dealer, that is, the target operation object. On the contrary, in a case that the detected direction of operating does not match the predetermined direction, it represents that the hand currently operating the game item is not the hand of the dealer, that is, not the target operation object.
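The direction-based check of steps 911-1 and 911-2 can be sketched as follows, assuming that bounding-box centers of the game item are available from two consecutive frames; the function names and the predetermined angle range are illustrative assumptions, not part of the disclosure:

```python
import math

def operating_direction(prev_center, curr_center):
    """Angle (degrees, 0-360) of the game item's movement between two frames."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def is_target_operation_object(prev_center, curr_center, angle_range=(45.0, 135.0)):
    """True when the movement direction falls inside the predetermined angle
    range, e.g. items moving from the dealer's side toward the players' side."""
    angle = operating_direction(prev_center, curr_center)
    return angle_range[0] <= angle <= angle_range[1]
```

In practice the angle range would be calibrated to the camera mounting described for FIG. 5, where the dealer and player face each other across the table.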
[137] As shown in FIG. 11, in some other implementations, the step 910 can include the following steps.
[138] At step 912-1, an operation object correlated with the game item is detected based on the video stream.
[139] At step 912-2, a face object correlated with the operation object is detected based on the video stream.
[140] At step 912-3, in response to that the face object is a predetermined face object, it is determined that the operation object for the game item is the target operation object.
[141] Specifically, based on a correlation algorithm, the operation object correlated with the current game item, such as a hand or a hand-held prop and the like, can be detected by the detection system. At the same time, a face object correlated with the operation object, for example, a face correlated with the hand operating the game item, can be further detected by the detection system with the correlation algorithm based on the video stream.
[142] The predetermined face object can be a face image of a predetermined target operation object. In a case that the detected face object matches the predetermined face object, it represents that the operation object currently operating the game item is the dealer's hand or hand-held prop, that is, the target operation object. In a case that the detected face object does not match the predetermined face object, it represents that the object currently operating the game item is not the dealer's hand or hand-held prop, that is, not the target operation object.
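The face-matching check of steps 912-1 to 912-3 could, under one common approach, compare feature vectors of the detected face and the predetermined face; the use of cosine similarity and the threshold value are assumptions for illustration and are not mandated by the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_predetermined_face(face_embedding, dealer_embedding, threshold=0.8):
    """The detected face matches the predetermined (dealer) face when the
    similarity of their feature vectors exceeds a threshold."""
    return cosine_similarity(face_embedding, dealer_embedding) >= threshold
```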
[143] At step 920, in response to that the operation object for the game item is the target operation object, the first position information of the game item and the second position information of the target operation object are detected.
[144] After determining that the operation object for the current game item is the target operation object, the above-mentioned detection on a stable state of the game item can be performed. For example, in the above-mentioned example, when it is determined that the operation object currently moving the game item is the dealer's hand, whether a card is in the operation completed state can be determined according to the video stream. Those skilled in the art can refer to the foregoing implementations for details, which will not be repeated herein.
[145] It can be seen from the above that, in the detection method provided by the implementations of the present disclosure, through the detection and determination performed on the operation object for the game item, interference from non-target personnel can be avoided, and the accuracy and reliability of a detection result can be improved.
[146] When it is determined that the game item is in the operation completed state through the above-mentioned method, information about the game item can be detected and recognized based on the video stream involving the game item in the operation completed state. A state of the game can be switched based on the recognition result, which will be described in detail below.
[147] In a tabletop game scenario, a dealer often needs to distribute a plurality of game items in sequence. For example, in one implementation, as shown in FIG. 5 and FIG. 2, the dealer needs to distribute four game items, and a specific order of distributing is as follows: the first game item is taken out from a game item depository box and moved to the sub-area 111; the second game item is taken out from the game item depository box and moved to the sub-area 112; the third game item is taken out from the game item depository box and moved to the sub-area 111; and the fourth game item is taken out from the game item depository box and moved to the sub-area 112.
[148] In this example scenario, for each image included in the video stream, whether each game item involved in the image is in the operation completed state can be determined through the above-mentioned implementation, and then a detection can be performed on a game item in the operation completed state. A game item being in the operation completed state indicates that the game item is stable and motionless, and there is no interference of being sheltered or motion, such that the detection of, for example, identification information, a position, etc. is more accurate. Specific description will be given in conjunction with FIG. 12 below.
[149] As shown in FIG. 12, in some implementations, the method of detecting a game item provided in the examples of the present disclosure includes the following steps.
[150] At step 1210, a current detection result of each game item that is in the operation completed state in a video frame is obtained by performing a detection on each video frame of the video stream sequentially.
[151] At step 1220, a historical detection result in a cache is updated based on a comparison between the current detection result and the historical detection result stored in the cache.
[152] At step 1230, in a case that one or more detection results in the cache meet a predetermined condition, the game is switched to a result processing state.
[153] It can be seen from the above that, in the present disclosure, the game process is divided into a plurality of different game states by the detection system, such as a game item distributing state corresponding to a phase of distributing game items by a dealer, a result processing state corresponding to a phase of collecting game coins, etc.
[154] When the game is in the game item distributing state and the game item within the game area is in the operation completed state, it represents that the game item has been distributed, and a detection can be performed on the game item to obtain a detection result thereof.
[155] For sequential images included in a video stream, the number of game items in the operation completed state may vary from image to image when the game is in the game item distributing state.
[156] For example, taking the above-mentioned game item distributing scenario as an example, when a dealer has distributed a first game item and is to distribute a second game item, it is detected that only the first game item is in the operation completed state, such that a current detection result of the first game item can be obtained through image detection. The detection result can include identification information and position information or the like of a game item. Taking that the game item is a card as an example, the identification information can include the points number and suit of a card, and the position information can include coordinates of a card.
[157] In some implementations, since the dealer has just started distributing the first game item, there are no historical detection results in a system cache at this time. When the detection result of the first game item is obtained, in response to that there are no historical detection results in the cache, the current detection result of the first game item can be saved into the cache. At the same time, this also indicates that the game has entered the phase of distributing game items from another phase (e.g. a phase of a player placing game coins), so that the detection system can switch the current game to the game item distributing state.
[158] Similarly, when the dealer has distributed the second game item and is to distribute a third game item, it is detected that both the first game item and the second game item are in the operation completed state. Through the image detection, current detection results of the first game item and the second game item are obtained. At this time, a historical detection result of the first game item is stored in the cache.
[159] The current detection results can be compared with the historical detection result stored in the cache, and in some implementations, whether the game item corresponding to the current detection result is newly appearing or has been stored in the cache can be determined based on position information of the game item. The following is illustrated in conjunction with the implementation shown in FIG. 13.
[160] As shown in FIG. 13, in some implementations, in the method of detecting a game item provided in the examples of the present disclosure, the process of updating the historical detection result stored in the cache includes the following steps.
[161] At step 1310, whether the game item is a newly-appearing game item is determined according to a comparison between position information of the game item in the current detection result and each position information in the historical detection result.
[162] At step 1320, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item is stored into the cache.
[163] Still taking the above scenario as an example, when the current detection results of the first game item and the second game item are detected, there is the historical detection result of the first game item stored in the cache at this time. The current detection result of the first game item and the current detection result of the second game item can be compared with the historical detection result respectively based on the position information.
[164] It can be understood that since the position of the first game item is not changed, the current detection result of the first game item should be the same as or similar to the historical detection result, thereby determining that the first game item is not a newly-appearing game item. However, the position of the second game item is different from the position of the first game item, so the current detection result of the second game item is quite different from the historical detection result, such that the second game item can be determined as a newly-appearing game item.
[165] After determining that the second game item is a newly-appearing game item, the detection result of the second game item can be stored into the cache. For example, the detection result includes the identification information and position information of the second game item.
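The position-based comparison of steps 1310 and 1320 can be sketched as follows; the distance tolerance, the dictionary layout of a detection result, and the function names are illustrative assumptions:

```python
def is_new_item(current_pos, cached_positions, tol=20.0):
    """A detection is 'newly appearing' when its position is not close to any
    position already stored in the cache (step 1310)."""
    cx, cy = current_pos
    for hx, hy in cached_positions:
        if (cx - hx) ** 2 + (cy - hy) ** 2 <= tol ** 2:
            return False  # matches a cached item; not newly appearing
    return True

def update_cache(cache, detections, tol=20.0):
    """Store the detection result of each newly-appearing game item (step 1320)."""
    for det in detections:
        if is_new_item(det["position"], [d["position"] for d in cache], tol):
            cache.append(det)
    return cache
```

Here the first game item, whose position is essentially unchanged, matches its cached entry and is skipped, while the second game item is appended as a new entry.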
[166] In some implementations, when a newly-appearing game item is stored into the cache, whether the current game is in the game item distributing state can be further determined, and only when the game is in the game item distributing state can the distributing of game items be allowed. For example, in the above example scenario, the game is switched to the game item distributing state by the system when the detection result of the first game item is stored into the cache, so that when storing the detection result of the second game item into the cache, the system determines that the current game is in the game item distributing state, and then the detection result of the second game item can be stored into the cache. However, when the current game is not in the game item distributing state, it indicates that an abnormal game operation occurs. The system can switch the game to a halt state and send an alert message to the background at the same time.
[167] For a game in a real scenario, as game items are continuously distributed, the one or more historical detection results stored in the cache are iteratively updated by repeatedly performing the above process until the one or more detection results in the cache meet a predetermined condition, which indicates that the game items distributing phase of the game is ended and the game enters into a game coins settlement phase, so that the game can be switched to the result processing state. The following description will be given in conjunction with the implementation shown in FIG. 14.
[168] As shown in FIG. 14, in some implementations, in the method of detecting a game item provided in the examples of the present disclosure, the process of determining that the one or more detection results meet the predetermined condition includes the following steps.
[169] At step 1410, for each of the one or more detection results in the cache, identification information of a game item involved in the detection result in the cache is obtained.
[170] At step 1420, whether a game processing result can be derived from the identification information obtained for each of the one or more detection results is determined.
[171] At step 1430, in response to that a game processing result can be derived from the identification information obtained for each of the one or more detection results, it is determined that the one or more detection results meet the predetermined condition.
[172] Specifically, for each of the one or more detection results stored in the cache, the detection system can obtain identification information of a game item involved in the detection result from the cache. Still using the game scenario described above as an example, the one or more detection results in the cache include identification information of each game item, such as the points number, suit and other information of a card. Then, according to the identification information of each game item, it is determined whether a game processing result can be obtained.
[173] In one example, taking a tabletop card game as an example of a game scenario, a business-level rule corresponding to a game processing result of the card game includes: the number of cards within the game area is 4, and of the sum of points number of the two cards within the sub-area 111 and the sum of points number of the two cards within the sub-area 112, the party whose sum is closest to 9 wins. Therefore, the sum of points number of the two cards in the sub-area 111 and the sum of points number of the two cards in the sub-area 112 are determined based on the obtained identification information of each card. Whether there is a party who received cards with a sum of points number closer to 9 can be determined by comparing the two sums. If the determination result is yes, it is determined that a game processing result can be obtained; and if the two sums are the same, it is determined that a game processing result cannot be obtained.
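The business-level rule of this example can be sketched as follows; the function signature is an assumption, and a tie in distance to 9 is treated as "no result can be derived", consistent with the equal-sums case described above:

```python
def derive_game_result(points_111, points_112):
    """Compare the two sums of points; the party whose sum is closer to 9 wins.
    Returns the winning sub-area name, or None when no game processing result
    can be derived (steps 1420 and 1430)."""
    s1, s2 = sum(points_111), sum(points_112)
    d1, d2 = abs(9 - s1), abs(9 - s2)
    if d1 == d2:
        return None  # no result; the game stays in the distributing state
    return "sub-area 111" if d1 < d2 else "sub-area 112"
```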
[174] Those skilled in the art can understand that the above example is only used to illustrate the method provided in the embodiments of the present disclosure, and does not limit the present disclosure. The process of determining whether a game processing result can be derived from the identification information can be set according to specific game scenarios, which is not limited in the present disclosure.
[175] When the detection system determines that a game processing result can be obtained, it indicates that the real game scenario has entered the game coins settlement phase, such that the detection system can switch the current game from the game item distributing state to a result processing state.
[176] It can be seen from the above that in the method of detecting a game item provided by the embodiments of the present disclosure, after determining that the game item is in the operation completed state, a video frame involving the game item in the operation completed state is used for detecting the identification information of the game item, which reduces the interference of being sheltered or motion on the image detection and recognition, improves the accuracy of detection, and at the same time realizes automatic switching of game states based on the detected identification information and improves the reliability and stability of the detection system.
[177] In some game scenarios, the number of game items is usually limited. For example, in an exemplary card game scenario, a business-level rule corresponding to a predetermined condition includes: the number of cards within the game area is 4 and the party who received cards with a sum of points number closest to 9 wins. In such a scenario, a dealer is expected to deal 4 cards during the game items distributing phase. Therefore, in the implementations of the present disclosure, the step of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results can be performed when the number of detection results stored in the cache reaches a predetermined number.
[178] Specifically, in the aforementioned implementations in FIG. 12 and FIG. 13, the detection system sequentially stores detection results of game items in the operation completed state into the cache until the number of the one or more detection results in the cache reaches the predetermined number. For example, in the above example scenario, when the number of the one or more detection results in the cache reaches 4, it indicates that the number of game items in the operation completed state is 4; corresponding to the real game scenario, it indicates that the dealer has distributed all the game items and a game processing result is to be calculated based on identification information of the game items. Thus, at a logic level of the detection system, when it is determined that the number of the one or more detection results in the cache reaches the predetermined number, the process of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results stored in the cache is started, which is not described in detail in the present disclosure.
[179] It can be seen from the above that, in the implementations of the present disclosure, whether a game processing result can be obtained is determined when the number of detection results in the cache reaches the predetermined number, rather than each time a detection result is stored, which reduces the amount of calculation of the detection system and improves the efficiency of the system.
[180] In some implementations, in the game items distributing phase, a dealer is often expected to distribute game items according to a predetermined distributing order. For example, in the game scenario in the above example, a dealer distributes four game items in an order of: sub-area 111 - sub-area 112 - sub-area 111 - sub-area 112. The detection system needs to determine whether the distributing order of the dealer is correct, and when it is determined that the distributing order is incorrect, the detection system can promptly raise an alarm or interrupt the game to avoid causing losses. The following is illustrated in conjunction with the implementation shown in FIG. 15.
[181] As shown in FIG. 15, in some implementations, the method of detecting a game item provided in the examples of the present disclosure further includes the following steps.
[182] At step 1510, sequential information of the cache storing each detection result is acquired.
[183] Specifically, in the above-mentioned implementations, for each game item, a sequence corresponding to the detection result of the game item can be recorded while storing the detection result of the game item in the cache, and the sequence can be taken as a distributing sequence of each game item. Therefore, when the detection system stores one or more detection results in sequence, sequential information of a certain detection result can be obtained.
[184] At step 1520, a target sub-area corresponding to the detection result is determined based on the sequential information and a pre-established correspondence between sequential information of game items and a plurality of sub-areas within the game area.
[185] Specifically, taking the above-mentioned scenario as an example, as shown in FIG. 2, the game item operating area 110 within the game area includes a sub-area 111 and a sub-area 112. In a real game scenario, a dealer needs to distribute four game items to a corresponding sub-area in an order of "sub-area 111 - sub-area 112 - sub-area 111 - sub-area 112".
[186] Therefore, at the logic level of the detection system, the correspondence between sequential information of game items and sub-areas can be established in advance. For example, in the present implementation, the pre-established correspondence can include: a sub-area corresponding to the first game item is the sub-area 111, a sub-area corresponding to the second game item is the sub-area 112, a sub-area corresponding to the third game item is the sub-area 111, and a sub-area corresponding to the fourth game item is the sub-area 112.
[187] When storing a detection result of a certain game item, the detection system can determine sequential information corresponding to the game item according to the sequential information of storing the detection result in the cache. For example, the detection system determines that sequential information of the currently stored detection result is 2, that is, representing that the game item corresponding to the current detection result is the second game item. Therefore, according to the above-mentioned correspondence, it can be determined that the target sub-area corresponding to the second game item is the sub-area 112.
[188] At step 1530, in response to determining that the game item is not in the target sub-area according to the position information of the game item, the current game is switched to a halt state.
[189] Specifically, when the target sub-area corresponding to the current detection result is determined, whether the game item corresponding to the detection result is in the target sub-area can be determined according to the position information of the detection result. If the determination result is yes, it represents that no errors occur in the distribution of game items; and if the determination result is no, it represents that the distribution of game items is abnormal and the current game needs to be interrupted.
[190] Still taking the above example scenario as an example, it is determined that the target sub-area corresponding to a current detection result of the second game item is the sub-area 112. Whether the second game item is located within the sub-area 112 is determined by the detection system based on the position information in the detection result. If the determination result is yes, it represents that no errors occur in the distributing order of the second game item. If the determination result is no, it represents that the distributing order of the second game item is abnormal, and then the detection system can switch the current game to the halt state and send an alarm message to the background at the same time.
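The order check of steps 1510 to 1530 can be sketched as follows; the rectangular bounds chosen for the sub-areas, the table layout, and the function names are hypothetical assumptions:

```python
# Pre-established correspondence: distributing sequence -> target sub-area
SEQUENCE_TO_SUBAREA = {1: "111", 2: "112", 3: "111", 4: "112"}

# Hypothetical sub-area rectangles as (x_min, y_min, x_max, y_max)
SUBAREA_BOUNDS = {"111": (0, 0, 200, 100), "112": (200, 0, 400, 100)}

def check_distributing_order(sequence, position):
    """True when the game item with the given storage sequence lies inside its
    target sub-area; False signals an abnormal distributing order, upon which
    the game would be switched to the halt state (step 1530)."""
    target = SEQUENCE_TO_SUBAREA[sequence]
    x_min, y_min, x_max, y_max = SUBAREA_BOUNDS[target]
    x, y = position
    return x_min <= x <= x_max and y_min <= y <= y_max
```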
[191] It can be seen from the above that in the implementations of the present disclosure, the distributing order of a game item is determined through the pre-established correspondence between sequential information and sub-areas, so as to improve the reliability of the system.
[192] In some implementations, determining whether the game item is in the operation completed state further includes: in response to that the game item is moved to the game item operating area, the first position information of the game item and the second position information of the operation object for the game item are detected.
[193] Specifically, as shown in FIG. 2, the game area includes the game item operating area 110. During the game, both a dealer and a player perform relevant operations on one or more game items within the game item operating area 110. Therefore, in the implementations of the present disclosure, the detection system can perform detection and recognition only on game items within the game item operating area 110.
[194] For example, in the scenario shown in FIG. 5, when a game item is taken out from a game item depository box 720 by a dealer, the detection system does not execute steps of the above-mentioned implementation until the game item is moved into the game item operating area 110, which avoids interference brought by a game item outside the game item operating area 110, and improves the stability of the system.
[195] It can be seen from the above that, in the method of detecting a game item provided by the implementations of the present disclosure, the detection on game scenes can be realized based on computer vision, and automatic switching of game states can be realized based on a video stream involving the game process, which improves the stability and reliability of the system. At the same time, switching between states is realized based on data regarding the game item in a stable state, thereby improving the accuracy of the detection.
[196] In some implementations, the implementations of the present disclosure provide an apparatus for detecting a game item. As shown in FIG. 16, the detection apparatus exemplified in the present disclosure includes: a first detecting module 10, configured to detect, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area, and second position information of an operation object for the game item; and a first determining module 20, configured to determine, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state.
[197] It can be seen from the above that, by realizing the detection on game scenes based on computer vision and thus detecting a stable state of a game item, the apparatus of the present disclosure can effectively decrease the interference of motion or being sheltered on the recognition for the game item, which is beneficial to improving the accuracy of subsequent recognition task(s).
[198] In some implementations, the first detecting module 10 is specifically configured to obtain a first boundary box of the game item within the game area and a second boundary box of the operation object by performing a detection on the video stream.
[199] In some implementations, the first determining module 20 is specifically configured to: determine, in response to that there is no overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is separating from each other; and/or determine, in response to that there is an overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is overlapping with each other.
[200] In some implementations, the first determining module 20 is specifically configured to: for each game item involved in the video stream of the game area, determine that the game item is in the operation completed state at a time corresponding to a first video frame in the video stream, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream.
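The frame-to-frame transition handled by the first determining module 20 can be sketched with axis-aligned bounding boxes; representing a box as (x_min, y_min, x_max, y_max) is an assumption for illustration:

```python
def boxes_overlap(box_a, box_b):
    """True when two axis-aligned boxes (x_min, y_min, x_max, y_max) share area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def operation_completed(prev_item_box, prev_hand_box, curr_item_box, curr_hand_box):
    """Operation completed at the current frame: the game item and the operation
    object overlapped in the preceding frame and are separated in the current one."""
    return boxes_overlap(prev_item_box, prev_hand_box) and not boxes_overlap(
        curr_item_box, curr_hand_box)
```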
[201] In some implementations, the first determining module 20 is specifically configured to obtain third position information of the game item in the preceding frame, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream; and determine, in response to that the first position information and the third position information meet a predetermined position condition, that the game item is in the operation completed state at the time corresponding to the first video frame.
[202] In some implementations, the first detecting module 10 is specifically configured to determine whether the operation object for the game item is a target operation object based on the video stream; and detect, in response to that the operation object for the game item is the target operation object, the first position information of the game item and the second position information of the target operation object.
[203] In some implementations, the first detecting module 10 is specifically configured to determine, based on the video stream, a direction of operating the game item by the operation object; and determine, in response to that the direction of operating is a predetermined direction, that the operation object is the target operation object.
[204] In some implementations, the first detecting module 10 is specifically configured to detect an operation object correlated with the game item based on the video stream; detect a face object correlated with the operation object based on the video stream; and determine, in response to that the face object is a predetermined face object, that the operation object for the game item is the target operation object.
[205] As shown in FIG. 17, in some implementations, the detection apparatus provided in the present disclosure further includes a second detecting module 30, which is configured to obtain, by performing a detection on each video frame in the video stream sequentially, a current detection result of each game item that is in the operation completed state in the video frame; a cache updating module 40, which is configured to update a historical detection result stored in a cache based on a comparison between the current detection result and the historical detection result in the cache; and a game state switching module 50, which is configured to switch the game to a result processing state in a case that one or more detection results in the cache meet a predetermined condition.
[206] In some implementations, the detection result includes position information of a game item, and the cache updating module is specifically configured to determine, according to a comparison between position information of the game item in the current detection result and each position information in the historical detection result, whether the game item is a newly-appearing game item; and store, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache.
[207] In some implementations, the cache updating module 40 is specifically configured to store, in response to that there is no historical detection result in the cache, the current detection result into the cache and switch the current game to a game item distributing state.
[208] In some implementations, the cache updating module 40 is specifically configured to store, in response to that the game item is a newly-appearing game item and the current game is in the game item distributing state, the detection result of the newly-appearing game item into the cache.
[209] In some implementations, the detection result includes identification information of a game item, and the game state switching module is specifically configured to obtain, for each of the one or more detection results in the cache, identification information of a game item involved in the detection result in the cache; determine whether a game processing result can be derived from the identification information obtained for each of the one or more detection results; and determine, in response to that a game processing result can be derived from the identification information obtained for each of the one or more detection results, that the one or more detection results meet the predetermined condition.
[210] In some implementations, the game state switching module 50 is specifically configured to execute, in response to that the number of the one or more detection results stored in the cache reaches a predetermined number, the step of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results.
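The gating logic of paragraphs [209] and [210] can be illustrated as below. The predetermined number, the state names, and the derivability rule (every cached detection must carry a readable identification) are assumptions for the sake of the example, not details fixed by the disclosure.

```python
PREDETERMINED_NUMBER = 3  # assumed: e.g. three dealt cards needed for a result

def can_derive_result(ids):
    """Assumed rule: a game processing result is derivable once every
    cached detection carries a recognised identification (no None)."""
    return all(i is not None for i in ids)

def maybe_switch_to_result_processing(cached_ids, state):
    """Switch to the result processing state only when enough detection
    results are cached AND a game processing result can be derived."""
    if len(cached_ids) < PREDETERMINED_NUMBER:
        return state  # not enough cached results yet: keep current state
    if can_derive_result(cached_ids):
        return "result_processing"
    return state
```

Checking the count first avoids attempting to derive a result from an incomplete set of detections.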
[211] As shown in FIG. 18, in some implementations, the detection result includes position information of a game item, and the detection apparatus provided in the present disclosure further includes an acquiring module, configured to acquire sequential information of storing each detection result in the cache; a second determining module 70, configured to determine a target sub-area corresponding to the detection result based on the sequential information and a pre-established correspondence between sequential information of game items and a plurality of sub-areas within the game area; and the game state switching module is configured to switch the current game to a halt state, in response to determining that the game item is not in the target sub-area according to the position information of the game item.
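The sub-area check of paragraph [211] can be sketched as follows. The sub-area coordinates, the ordering-to-area mapping, and the state names are illustrative assumptions; the disclosure only requires some pre-established correspondence between dealing order and sub-areas.

```python
# Assumed pre-established correspondence between the order in which game
# items are stored in the cache and sub-areas of the game table, each
# given as (x_min, y_min, x_max, y_max).
SUB_AREAS = {
    0: (0.0, 0.0, 50.0, 50.0),     # 1st stored item -> first sub-area
    1: (50.0, 0.0, 100.0, 50.0),   # 2nd stored item -> second sub-area
}

def in_area(x, y, area):
    """Axis-aligned containment test for a point in a rectangle."""
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_placement(order_index, x, y, state="distributing"):
    """Switch the game to a halt state if the item stored at
    `order_index` lies outside its expected target sub-area."""
    target = SUB_AREAS.get(order_index)
    if target is not None and not in_area(x, y, target):
        return "halt"
    return state
```

An item detected at a position inconsistent with its dealing order is flagged by halting the game, so the irregularity can be inspected.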
[212] In some implementations, the game area includes a game item operating area, and the first detecting module 10 is specifically configured to detect, in response to that the game item is moved to the game item operating area, the first position information of the game item and the second position information of the operation object for the game item.
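The trigger condition of paragraph [212] can be illustrated with a simple containment test. The coordinates of the operating area and the use of the box centre as the trigger point are assumptions made for the example.

```python
# Assumed game item operating area, as (x_min, y_min, x_max, y_max).
OPERATING_AREA = (20.0, 20.0, 80.0, 80.0)

def box_center(box):
    """Centre point of an axis-aligned box (x_min, y_min, x_max, y_max)."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def should_detect(item_box):
    """Trigger the position detection of paragraph [212] only once the
    game item's centre lies inside the operating area."""
    cx, cy = box_center(item_box)
    x_min, y_min, x_max, y_max = OPERATING_AREA
    return x_min <= cx <= x_max and y_min <= cy <= y_max
```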
[213] It can be seen from the above that, in the apparatus for detecting a game item provided by the implementations of the present disclosure, the detection on game scenes can be realized based on computer vision, and automatic switching of game states can be realized based on a video stream involving the game process, which improves the stability and reliability of the system. At the same time, switching between states is realized based on data regarding the game item in a stable state, thereby improving the accuracy of the detection.
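The overlap-to-separation criterion underlying this state switching (claim 1 and the paragraphs above) can be sketched as follows; the rectangle-intersection test and the frame-pair interface are a minimal illustration, not the disclosed implementation.

```python
def boxes_overlap(a, b):
    """True if two axis-aligned boxes (x_min, y_min, x_max, y_max) intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def operation_completed(prev_item, prev_hand, cur_item, cur_hand):
    """Operation-completed rule: the game item and the operation object
    overlapped in the preceding frame and are separated in the current
    frame."""
    return boxes_overlap(prev_item, prev_hand) and not boxes_overlap(cur_item, cur_hand)
```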
[214] The present disclosure provides a device for detecting a game item. In some implementations, the device for detecting the game item includes a processor and a memory storing computer instructions that can be read by the processor, and the processor, when reading the computer instructions, is caused to execute the method according to any one of the above-mentioned implementations.
[215] In some implementations, the above-mentioned detection device can be a computing device with data processing capability deployed in a game place, and the processor therein can run the above-mentioned detection method to realize the detection of the state of the game item in the game place.
[216] The present disclosure provides a system for detecting a game item. In some implementations, the system includes an image capture device, configured to acquire a video stream of a game playing in a game area; a processor, connected with the image capture device to obtain the video stream of the game; and a memory storing computer instructions that can be read by the processor. The processor, when reading the computer instructions, is caused to execute the method according to any one of the above-mentioned implementations.
[217] The detection device and detection system provided by the implementations of the present disclosure can be implemented with reference to the foregoing embodiments, and will not be repeated herein.
[218] The present disclosure provides a non-transitory computer readable storage medium, which stores computer readable instructions for causing the computer to execute the method according to any one of the above-mentioned implementations.
[219] Specifically, FIG. 19 shows a schematic structural diagram of a computer system 600 applicable to performing the method of the present disclosure. Through the system shown in FIG. 19, the corresponding functions of the processor and storage medium can be realized.
[220] As shown in FIG. 19, the computer system 600 includes a processor 601, which can perform various appropriate actions and processes according to a program stored in the memory 602 or a program loaded into the memory 602 from a storage section 608. In the memory 602, various programs and data required for the operation of the system 600 are also stored. The processor 601 and the memory 602 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
[221] The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, or the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, or the like; a storage section 608 including a hard disk, or the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as required. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed on the drive 610 as required, so that a computer program read therefrom can be installed into the storage section 608 as required.
[222] In particular, according to the implementations of the present disclosure, the above method process can be implemented as a computer software program. For example, the implementations of the present disclosure include a computer program product, which includes a computer program tangibly embodied on a machine-readable medium, and the computer program includes program code for executing the above method. In such an implementation, the computer program can be downloaded and installed from the network through the communication section 609, and/or installed from the removable medium 611.
[223] The flowcharts and block diagrams in the accompanying drawings illustrate the possible implementation architecture, functions, and operations of the system, method, and computer program product according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagram can represent a module, program segment, or part of a code, and the module, program segment, or part of the code contains one or more executable instructions for realizing the specified logic function. It should also be noted that, in some alternative implementations, the functions marked in the block can also take place in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and the combination of the blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or it can be implemented by a combination of dedicated hardware and computer instructions.
[224] Apparently, the above implementations are merely examples given for clear description, and are not intended to limit the implementations. For those of ordinary skill in the art, other changes or modifications in different forms can be made on the basis of the above description. It is neither necessary nor possible to list all the implementations here. Obvious changes or modifications derived therefrom still fall within the scope of protection of the present disclosure.

Claims (21)

  1. A method of detecting a game item, the method comprising: detecting, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area and second position information of an operation object for the game item; and determining, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state.
  2. The method according to claim 1, wherein detecting the first position information of the game item within the game area and the second position information of the operation object for the game item comprises: obtaining a first boundary box of the game item within the game area and a second boundary box of the operation object by performing a detection on the video stream.
  3. The method according to claim 2, further comprising at least one of determining, in response to that there is no overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is separating from each other; or determining, in response to that there is an overlapping area between the first boundary box and the second boundary box, that the positional relationship between the game item and the operation object is overlapping with each other.
  4. The method according to any one of claims 1 to 3, wherein determining, in response to determining that the positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in the operation completed state comprises: for each game item involved in the video stream of the game area, determining that the game item is in the operation completed state at a time corresponding to a first video frame in the video stream, in response to that the game item and the operation object are detected as separating from each other in the first video frame, and the game item and the operation object are detected as overlapping with each other in a preceding frame of the first video frame in the video stream.
  5. The method according to claim 4, wherein determining that the game item is in the operation completed state at the time corresponding to the first video frame comprises: obtaining third position information of the game item in the preceding frame; and determining, in response to that the first position information and the third position information meet a predetermined position condition, that the game item is in the operation completed state at the time corresponding to the first video frame.
  6. The method according to any one of claims 1 to 5, wherein detecting the first position information of the game item within the game area and the second position information of the operation object for the game item comprises: determining, based on the video stream, whether the operation object for the game item is a target operation object; and detecting, in response to that the operation object for the game item is the target operation object, the first position information of the game item and the second position information of the target operation object.
  7. The method according to claim 6, wherein determining, based on the video stream, whether the operation object for the game item is the target operation object comprises: determining, based on the video stream, a direction of operating the game item by the operation object; and determining, in response to that the direction of operating is a predetermined direction, that the operation object is the target operation object.
  8. The method according to claim 6, wherein determining, based on the video stream, whether the operation object for the game item is the target operation object comprises: detecting, based on the video stream, an operation object correlated with the game item; detecting, based on the video stream, a face object correlated with the operation object; and determining, in response to that the face object is a predetermined face object, that the operation object for the game item is the target operation object.
  9. The method according to any one of claims 1 to 8, further comprising: obtaining, by performing a detection on each video frame in the video stream sequentially, a current detection result of each game item that is in the operation completed state in the video frame; updating a historical detection result stored in a cache based on a comparison between the current detection result and the historical detection result in the cache; and switching, in a case that one or more detection results in the cache meet a predetermined condition, the game to a result processing state.
  10. The method according to claim 9, wherein the detection result comprises position information of a game item, updating the historical detection result in the cache based on the comparison between the current detection result and the historical detection result in the cache comprises: determining, according to a comparison between position information of the game item in the current detection result and each position information in the historical detection result, whether the game item is a newly-appearing game item; and storing, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache.
  11. The method according to claim 10, wherein updating the historical detection result in the cache based on the comparison between the current detection result and the historical detection result stored in the cache further comprises: storing, in response to that there is no historical detection result in the cache, the current detection result into the cache and switching the current game to a game item distributing state.
  12. The method according to claim 11, wherein storing, in response to that the game item is a newly-appearing game item, the detection result of the newly-appearing game item into the cache comprises: storing, in response to that the game item is a newly-appearing game item and the current game is in the game item distributing state, the detection result of the newly-appearing game item into the cache.
  13. The method according to any one of claims 9 to 12, wherein the detection result comprises identification information of a game item; the determining that the one or more detection results in the cache meet the predetermined condition comprises: obtaining, for each of the one or more detection results in the cache, identification information of a game item involved in the detection result in the cache; determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results; and determining, in response to that a game processing result can be derived from the identification information obtained for each of the one or more detection results, that the one or more detection results meet the predetermined condition.
  14. The method according to claim 13, before determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results, the method further comprising: executing, in response to that a number of the one or more detection results stored in the cache reaches a predetermined number, the step of determining whether a game processing result can be derived from the identification information obtained for each of the one or more detection results.
  15. The method according to any one of claims 9 to 14, wherein the detection result comprises position information of a game item, the method further comprising: acquiring sequential information of storing each detection result in the cache; determining a target sub-area corresponding to the detection result based on the sequential information and a pre-established correspondence between sequential information of game items and a plurality of sub-areas within the game area; switching the current game to a halt state, in response to determining that the game item is not in the target sub-area according to the position information of the game item.
  16. The method according to claim 1, wherein the game area comprises a game item operating area, detecting the first position information of the game item within the game area and the second position information of the operation object for the game item comprises: detecting, in response to that the game item is moved to the game item operating area, the first position information of the game item and the second position information of the operation object for the game item.
  17. An apparatus for detecting a game item, comprising: a first detecting module, configured to detect, based on an acquired video stream of a game playing in a game area, first position information of a game item within the game area, and second position information of an operation object for the game item; and a first determining module, configured to determine, in response to determining that a positional relationship between the game item and the operation object is changed from overlapping with each other to separating from each other based on the first position information and the second position information, that the game item is in an operation completed state.
  18. A device for detecting a game item, comprising: a processor, and a memory storing computer instructions that can be read by the processor, and the processor, when reading the computer instructions, is caused to execute the method according to any one of claims 1 to 16.
  19. A system for detecting a game item, comprising: an image capture device, configured to acquire a video stream of a game playing in a game area; a processor, connected with the image capture device to obtain the video stream of the game; and a memory storing computer instructions readable by the processor, and when the computer instructions are read, the processor executes the method according to any one of claims 1 to 16.
  20. A non-transitory computer readable storage medium, storing computer readable instructions for causing the computer to perform the method according to any one of claims 1 to 16.
  21. A computer program, comprising computer-readable codes which, when executed in an electronic device, cause a processor in the electronic device to perform the method of any one of claims 1 to 16.
AU2021240274A 2021-09-13 2021-09-24 Methods, apparatuses, devices, systems and storage media for detecting game items Abandoned AU2021240274A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10202110070U 2021-09-13
PCT/IB2021/058727 WO2023037157A1 (en) 2021-09-13 2021-09-24 Methods, apparatuses, devices, systems and storage media for detecting game items

Publications (1)

Publication Number Publication Date
AU2021240274A1 true AU2021240274A1 (en) 2023-03-30

Family

ID=79932615

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021240274A Abandoned AU2021240274A1 (en) 2021-09-13 2021-09-24 Methods, apparatuses, devices, systems and storage media for detecting game items

Country Status (2)

Country Link
CN (1) CN114008673A (en)
AU (1) AU2021240274A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063739B (en) * 2022-06-10 2023-06-16 嘉洋智慧安全科技(北京)股份有限公司 Abnormal behavior detection method, device, equipment and computer storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8016665B2 (en) * 2005-05-03 2011-09-13 Tangam Technologies Inc. Table game tracking
US20070111773A1 (en) * 2005-11-15 2007-05-17 Tangam Technologies Inc. Automated tracking of playing cards
US10176456B2 (en) * 2013-06-26 2019-01-08 Amazon Technologies, Inc. Transitioning items from a materials handling facility
SG11201800927TA (en) * 2015-08-03 2018-03-28 Angel Playing Cards Co Ltd Fraud detection system in casino
CA3024336A1 (en) * 2016-05-16 2017-11-23 Sensen Networks Group Pty Ltd System and method for automated table game activity recognition
JP6523506B2 (en) * 2018-03-12 2019-06-05 株式会社コナミデジタルエンタテインメント Game control device, game system and program
AU2019240623A1 (en) * 2018-10-05 2020-04-23 Aristocrat Technologies Australia Pty Limited System and method for managing digital wallets
US20220375300A1 (en) * 2019-01-31 2022-11-24 Angel Group Co., Ltd. Management system
CN111068323B (en) * 2019-12-20 2023-08-22 腾讯科技(深圳)有限公司 Intelligent speed detection method, intelligent speed detection device, computer equipment and storage medium
SG10201913152SA (en) * 2019-12-24 2021-07-29 Sensetime Int Pte Ltd Method And Apparatus For Detecting Dealing Sequence, Storage Medium And Electronic Device
KR20220169466A (en) * 2021-06-18 2022-12-27 센스타임 인터내셔널 피티이. 리미티드. Methods and devices for controlling game states
KR20230000923A (en) * 2021-06-24 2023-01-03 센스타임 인터내셔널 피티이. 리미티드. game monitoring

Also Published As

Publication number Publication date
CN114008673A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN111145214A (en) Target tracking method, device, terminal equipment and medium
JP7416782B2 (en) Image processing methods, electronic devices, storage media and computer programs
US11461997B2 (en) Matching method and apparatus, electronic device, computer-readable storage medium, and computer program
MacLean et al. Fast hand gesture recognition for real-time teleconferencing applications
US20170100661A1 (en) Vision system for monitoring board games and method thereof
US20230326256A1 (en) Identity recognition method, computer apparatus, non-transitory computer-readable storage medium
CN109977824A (en) Article picks and places recognition methods, device and equipment
WO2023273344A1 (en) Vehicle line crossing recognition method and apparatus, electronic device, and storage medium
CN111553234A (en) Pedestrian tracking method and device integrating human face features and Re-ID feature sorting
AU2021240274A1 (en) Methods, apparatuses, devices, systems and storage media for detecting game items
AU2021204586A1 (en) Methods, apparatuses, devices and storage media for switching states of tabletop games
CN111986229A (en) Video target detection method, device and computer system
KR20220169466A (en) Methods and devices for controlling game states
WO2023037157A1 (en) Methods, apparatuses, devices, systems and storage media for detecting game items
CN112489450B (en) Traffic intersection vehicle flow control method, road side equipment and cloud control platform
CN115222778A (en) Moving object detection method and device based on optical flow, electronic device and medium
CN112686941B (en) Method and device for recognizing rationality of movement track of vehicle and electronic equipment
AU2021240276A1 (en) Methods, apparatuses, devices and storage media for switching states of card games
WO2022263903A1 (en) Methods and apparatuses for controlling game states
CN113728326A (en) Game monitoring
CN113508392B (en) Processing method, device, system, equipment and storage medium for abnormal event
US20220415118A1 (en) Methods of detecting game prop operation event and devices and systems thereof
CN114727147B (en) Video recording method and device
WO2022269327A1 (en) Methods of detecting game prop operation event and apparatuses, devices and systems thereof
JP2018036894A (en) Image extraction device, face collation system, player management system, image extraction method, and program

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted