CN114179100A - Method and device for playing chess, chess playing robot and computer storage medium - Google Patents

Method and device for playing chess, chess playing robot and computer storage medium

Info

Publication number
CN114179100A
CN114179100A
Authority
CN
China
Prior art keywords
playing
target
chess
information
execution information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111449490.XA
Other languages
Chinese (zh)
Inventor
周茗岩
刘锦金
唐明勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202111449490.XA
Publication of CN114179100A
Current legal status: Withdrawn

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/003Manipulators for entertainment
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Toys (AREA)

Abstract

The disclosed embodiment provides a playing method and device, a playing robot and a storage medium, wherein the method is applied to the playing robot and comprises the following steps: receiving a target information instruction in response to a target interaction operation that is performed by a target interaction object and meets a preset condition; determining, according to the target information instruction, execution information matched with the target interaction operation; and performing a target playing operation with the target interaction object according to the execution information.

Description

Method and device for playing chess, chess playing robot and computer storage medium
Technical Field
The present disclosure relates to, but is not limited to, the field of playing robots, and in particular, to a playing method and apparatus, a playing robot, and a computer storage medium.
Background
At present, chess games have become a popular pastime; they offer endless variation and interest, and are also beneficial to intellectual development and the cultivation of temperament. With the development of computer and robotics technology, robots capable of playing chess with human players are in great market demand.
In the related art, the basic function of a playing robot is generally limited to playing chess, and interactivity between the player and the robot during a game is relatively lacking.
Disclosure of Invention
The disclosed embodiment provides a playing method and device, a playing robot and a computer storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the disclosed embodiment provides a playing method, which is applied to playing robots and comprises the following steps:
receiving a target information instruction in response to a target interaction operation that is performed by a target interaction object and meets a preset condition;
determining, according to the target information instruction, execution information matched with the target interaction operation;
and performing a target playing operation with the target interaction object according to the execution information.
The disclosed embodiment provides a playing device, which is applied to a playing robot; the playing device includes:
the receiving module is used for responding to target interaction operation which is executed by a target interaction object and meets a preset condition, and receiving a target information instruction;
the first determining module is used for determining execution information matched with the target interactive operation according to the target information instruction;
and the first processing module is used for performing playing processing with the target interaction object according to the execution information.
The disclosed embodiment provides a playing robot, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the computer program to realize the steps in the playing method.
The disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps in the above-described playing method.
In the embodiment of the disclosure, a target information instruction is received in response to a target interaction operation that is performed by a target interaction object and meets a preset condition; execution information matched with the target interaction operation is determined according to the target information instruction; and a target playing operation is performed with the target interaction object according to the execution information. In this way, interaction between the playing robot and the user is realized through the target interaction operation, providing the playing robot with a human-computer interaction function; at the same time, the human-machine playing process becomes richer and friendlier, which further improves the user's interaction experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow chart of an implementation of a playing method provided by the embodiment of the present disclosure;
FIG. 2 is a schematic view of an implementation flow of a playing method provided by the embodiment of the present disclosure;
FIG. 3 is a schematic view of an implementation flow of a playing method provided by the embodiment of the present disclosure;
FIG. 4 is a schematic view of an implementation flow of a playing method provided by the embodiment of the present disclosure;
FIG. 5 is a schematic view of an implementation flow of a playing method provided by the embodiment of the present disclosure;
FIG. 6A is a schematic diagram of a playing interaction system provided by an embodiment of the present disclosure;
FIG. 6B is a schematic diagram of a playing interaction process provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a playing device provided in the embodiment of the present disclosure;
fig. 8 is a hardware entity diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, references to the terms "first/second/third" are intended merely to distinguish similar objects and do not denote a particular order; it is to be understood that "first/second/third" may, where permitted, be interchanged in a particular order or sequence so that the embodiments of the disclosure described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
At present, chess games have become a popular pastime; they offer endless variation and interest, and are also beneficial to intellectual development and the cultivation of temperament. With the development of computer and robotics technology, robots capable of playing chess with human players are in great market demand.
In the related art, the basic function of a playing robot is generally limited to playing chess. Considering that a game requires the player to interact with the robot, human-computer interaction can take many forms, for example audio, visual, tactile and interactive-interface interaction. An interactive interface relies on the display screen of the embedded system; however, making such an interface achieve the expected practical effect involves a long development cycle, so it is not the most ideal choice. Simple indicator lights can assist the user during play and achieve a simple, reliable effect. The robot can also trigger specific actions in some form at appropriate times, but in practice the mechanical arm's most important task is still to move the pieces, so too many extra actions make the robot's human-computer interaction overly complicated and hinder its overall use. Therefore, playing robots need to support more complete human-computer interaction functions so that the human-machine playing process becomes richer and friendlier.
The embodiment of the disclosure provides a chess playing method, which can realize interaction between the chess playing robot and the user through target interaction operations, provide the chess playing robot with a human-computer interaction function, and thereby make the human-machine chess playing process richer and friendlier. An exemplary application of the computer device set forth in the embodiments of the present disclosure is described below. The computer device provided by the embodiment of the disclosure can be implemented as a mobile phone terminal, a notebook computer, a tablet computer, a chess playing robot, another online chess playing platform, a server, a desktop computer, a smart television, a vehicle-mounted device, an industrial device, and the like.
In the following, the technical solutions in the embodiments of the present disclosure will be clearly and completely described with reference to the drawings in the embodiments of the present disclosure.
Fig. 1 is a schematic flow chart of an implementation of a playing method provided in an embodiment of the present disclosure, where the method may be executed by a playing robot, and as shown in fig. 1, the playing method includes:
and step S11, responding to the target interactive operation which is executed by the target interactive object and meets the preset condition, and receiving a target information instruction.
Here, the target interaction object may include, but is not limited to, a human, a machine, and the like. The target interaction operation may include, but is not limited to, a key operation, a gesture operation, a voice operation, a somatosensory operation, and the like. The target information instruction includes, but is not limited to, a game-opening instruction, a move instruction, a move take-back ("regret") instruction, and the like. The preset conditions include, but are not limited to, whether the key operation matches prompt information, whether the pressure value and/or pressing duration of the key meets a threshold condition, whether a gesture forms a closed figure, whether the angle, acceleration, speed, displacement and/or amplitude of the gesture meets a threshold condition, and the like.
And step S12, determining the execution information matched with the target interactive operation according to the target information instruction.
Here, the execution information includes, but is not limited to, receiving an opening-information instruction, receiving a move-information instruction, controlling a signal lamp to switch its operating state, playing audio, and the like.
In some embodiments, a mapping table of the correspondence between target information instructions and execution information may be established in advance, and the mapping table may be stored in the playing robot or in another terminal. When the mapping table is stored in the playing robot, the playing robot determines, in the mapping table and according to the target information instruction, the execution information matched with the target interaction operation. When the mapping table is stored in another terminal, the playing robot sends the target information instruction to that terminal, so that the terminal determines, in the mapping table and according to the target information instruction, the execution information matched with the target interaction operation and returns the corresponding execution information to the playing robot.
For example, when the user presses the open key, the playing robot receives a game-opening instruction and, based on that instruction, looks up in the mapping table the matching execution information, namely an instruction to receive opening information. Likewise, when the user presses the go key, the playing robot receives a move instruction and, based on that instruction, looks up in the mapping table the matching execution information, namely an instruction to receive move information.
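For illustration only, a minimal sketch of such a mapping-table lookup is given below; the instruction names and the dictionary-based representation are assumptions made for the example and are not part of the disclosure.

```python
# Hypothetical sketch of the instruction-to-execution-information mapping table
# described above; the names and values are illustrative assumptions only.

EXECUTION_MAP = {
    "OPEN_GAME": "RECEIVE_OPENING_INFO",  # open key pressed   -> receive opening information
    "MOVE_DONE": "RECEIVE_MOVE_INFO",     # go key pressed     -> receive move information
    "REGRET": "PERFORM_TAKE_BACK",        # regret key pressed -> take back the last move
}

def lookup_execution_info(target_instruction: str) -> str:
    """Return the execution information matched with the target information instruction."""
    try:
        return EXECUTION_MAP[target_instruction]
    except KeyError as err:
        raise ValueError(f"Unknown instruction: {target_instruction}") from err
```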
And step S13, performing a target playing operation with the target interaction object according to the execution information.
Here, the target playing operation includes, but is not limited to, at least one of controlling a signal lamp to switch its operating state, playing corresponding audio, controlling the playing robot to switch its playing state, and the like. The playing state includes, but is not limited to, a ready-to-play state, a play-ended state, and the like. The switching of the signal lamp's operating state includes, but is not limited to, switching from a disabled state to an enabled state, switching from one color to another color, switching from one blinking frequency to another blinking frequency, and the like.
For example, when the execution information is an instruction to receive opening information, the playing robot is controlled to enter the ready-to-play state, the signal lamp is controlled to switch from the disabled state to the enabled state, and/or audio instructing the user to make a move is played.
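A minimal sketch of acting on that execution information (step S13) might look as follows; the robot state, lamp flag and audio call are hypothetical stand-ins rather than the disclosed implementation.

```python
# Illustrative stand-in for performing the target playing operation of step S13.

class PlayingRobotStub:
    """Hypothetical robot front-end: playing state, one signal lamp, audio output."""

    def __init__(self) -> None:
        self.state = "idle"
        self.lamp_enabled = False  # disabled state

    def perform_target_play_operation(self, execution_info: str) -> None:
        if execution_info == "RECEIVE_OPENING_INFO":
            self.state = "ready_to_play"                  # switch the playing state
            self.lamp_enabled = True                      # disabled -> enabled signal lamp
            print("Playing audio: It is your move now.")  # stand-in for audio playback

robot = PlayingRobotStub()
robot.perform_target_play_operation("RECEIVE_OPENING_INFO")
```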
In some embodiments, the method further comprises:
and step S14, determining a target playing result in response to the fact that the current playing game information meets the preset playing rules and meets the playing ending conditions.
Here, the preset playing rules may include, but are not limited to, whether the placement positions of the respective pieces are correct, whether there are pieces on the board, and the like. The playing end conditions may include, but are not limited to, winning or losing, time of play, etc. The target play results include, but are not limited to, user wins, robot wins, ties, and the like.
For example, in gobang (five-in-a-row), when the current game information includes five of the user's white pieces connected in a line, it is determined that the user wins.
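The five-in-a-row check used in the gobang example can be sketched as follows; this is a generic illustration under an assumed board representation, not the disclosed algorithm.

```python
# Illustrative five-in-a-row check for the gobang example; the board is assumed
# to be a square grid whose cells hold "white", "black" or None.

from typing import List, Optional

def five_in_a_row_winner(board: List[List[Optional[str]]]) -> Optional[str]:
    """Return "white" or "black" if that side has five pieces in a line, else None."""
    size = len(board)
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals
    for row in range(size):
        for col in range(size):
            piece = board[row][col]
            if piece is None:
                continue
            for dr, dc in directions:
                if all(
                    0 <= row + i * dr < size
                    and 0 <= col + i * dc < size
                    and board[row + i * dr][col + i * dc] == piece
                    for i in range(5)
                ):
                    return piece
    return None
```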
Step S15, acquiring fifth execution information based on the target playing result; and controlling a signal lamp to switch the working state according to the fifth execution information, and playing a sixth audio indicating the target game result.
Here, the fifth execution information may be stored in advance in the playing robot and includes, but is not limited to, controlling signal lamps to switch operating states, playing audio, and the like. The switching of the signal lamp's operating state may include, but is not limited to, switching from a lit state to a blinking state, switching from a lit state to an unlit state, switching from an unlit state to a blinking state, switching from an unlit state to a lit state, and the like. The sixth audio may be stored in advance in the playing robot, and may include, but is not limited to, "Congratulations, you won", "I narrowly won this game", "You played really well", and the like.
For example, in gobang, when the user wins, the signal lamp is controlled to switch from the unlit state to the blinking state, and "You played really well" is played.
In some embodiments, the controlling the signal lamp to switch the working state according to the fifth execution information includes: and controlling the first signal lamp to be switched from the first working state to the second working state according to the fifth execution information.
Here, in a case where the first signal lamp includes a blue lamp, the blue lamp is controlled to switch from the disabled state to the enabled state, that is, the blue lamp is turned on.
In some embodiments, the method further comprises:
step S16, responding to the situation that the current chess playing information does not meet the preset playing rules, and acquiring sixth execution information; and controlling a signal lamp to switch the working state according to the sixth execution information, and playing a seventh audio indicating chess moving error alarm.
Here, the sixth execution information may be stored in advance in the playing robots, and includes, but is not limited to, controlling signal lamps to switch operating states, playing audio, and the like. The switching of the operation state of the signal lamp may include, but is not limited to, switching from an illuminated state to a blinking state, switching from an illuminated state to an unlit state, switching from an unlit state to a blinking state, switching from an unlit state to an illuminated state, switching from one color to another color, switching from one blinking frequency to another blinking frequency, and the like. The seventh audio may be stored in advance in the playing robot. Wherein, the seventh audio may include, but is not limited to, "chess pieces are wrong in color", "chess pieces are not placed at right positions", etc.
For example, in a gobang, when the user selects a black piece of the playing robot, the control signal lamp is switched from the unlit state to the blinking state, and "piece color error" is played.
In some embodiments, the signal lamp further includes a third signal lamp, and the controlling the signal lamp to switch the working state according to the sixth execution information includes:
and controlling a third signal lamp to be switched from the first working state to the second working state according to the sixth execution information, wherein the third signal lamp is used for giving an alarm for chess playing errors.
In some embodiments, the first signal lamp, the second signal lamp and the third signal lamp display different colors of light when in the second operating state. For example, the first signal lamp, the second signal lamp and the third signal lamp respectively display blue, green and red when in the second working state.
In the embodiment of the disclosure, a target information instruction is received in response to a target interaction operation that is performed by a target interaction object and meets a preset condition; execution information matched with the target interaction operation is determined according to the target information instruction; and a target playing operation is performed with the target interaction object according to the execution information. In this way, interaction between the playing robot and the user is realized through the target interaction operation, providing the playing robot with a human-computer interaction function; at the same time, the human-machine playing process becomes richer and friendlier, which further improves the user's interaction experience.
Fig. 2 is a schematic flow chart of an implementation of a playing method provided in the embodiment of the present disclosure, where the method may be executed by a playing robot, and as shown in fig. 2, the playing method includes:
and step S21, collecting the current scene information of the target playing area.
Here, the playing robot includes a collecting device for collecting scene information of the target playing area. The target playing area can be a playing area in a virtual chessboard or a playing area in a solid chessboard.
In some embodiments, the acquisition device may include a camera.
And step S22, carrying out face detection processing on the current scene information to obtain a face detection result.
Here, at least one image may be captured by the capturing device, and the captured at least one image may be subjected to face detection processing to obtain a face detection result. The face detection process includes, but is not limited to, face detection through a neural network. In implementation, a person skilled in the art may select an appropriate manner to implement face detection according to actual requirements, which is not limited herein.
In some embodiments, since complex scenes such as backlit, highlighted or dimly lit backgrounds can make face detection difficult, the image may be preprocessed before face detection is performed on it, so as to improve the accuracy of face detection. The preprocessing includes, but is not limited to, light adjustment, illumination transformation, histogram equalization, and the like.
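As an illustration of such preprocessing plus detection, one common approach is histogram equalization followed by an off-the-shelf detector; the OpenCV Haar cascade used below is an assumption for the sketch, since the disclosure does not fix a specific detector.

```python
# Illustrative preprocessing + face detection sketch (assumed OpenCV pipeline).

import cv2

def detect_faces(image_bgr):
    """Equalize the frame to compensate for backlight/dim light, then detect faces."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # simple illumination normalization
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    # Returns a list of (x, y, w, h) boxes; an empty result means no face was found.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```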
And step S23, determining the interaction information matched with the target interaction object based on the face detection result.
Here, the face detection result includes the presence of an interactive object and the absence of an interactive object. The interaction information includes, but is not limited to, playing corresponding audio, controlling a signal lamp to switch its working state, controlling the playing robot to switch its playing state, and the like. For example, in the case that the face detection result indicates the presence of an interactive object, a blue light is controlled to blink, and the blinking blue light is used to instruct the user to press the enter key. For example, in the case that the face detection result indicates the presence of an interactive object, the mechanical arm of the playing robot is controlled to enter the ready-to-play state.
In some embodiments, the step S23 includes:
responding to the face detection result that the target interaction object exists, and playing a first audio indicating opening operation; and/or responding to the human face detection result that the target interaction object does not exist, and playing second audio for stopping the opening operation.
Here, the first audio and the second audio may be stored in advance in the playing robot. The first audio includes, but is not limited to, "Would you like to play a game?", "Respected player, let us play a game", and the like. The second audio includes, but is not limited to, "No player found", "Resting, waiting for a player", and the like.
For example, the playing robot performs face detection on a captured image and, when a face is detected in the image, plays "Respected player, let us play a game".
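Continuing the sketch above, choosing between the first and second audio from the detection result could look like this; the file names are placeholders, not disclosed assets.

```python
# Illustrative mapping from the face detection result to the interaction audio.

def choose_interaction_audio(faces) -> str:
    """Return the prompt to play depending on whether a player was detected."""
    if len(faces) > 0:
        return "first_audio_opening_prompt.wav"  # e.g. "Respected player, let us play a game"
    return "second_audio_stop_opening.wav"       # e.g. "No player found"
```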
And step S24, responding to the target interactive operation which is executed by the target interactive object and meets the preset condition, and receiving a target information instruction.
And step S25, determining the execution information matched with the target interactive operation according to the target information instruction.
And step S26, performing a target playing operation with the target interaction object according to the execution information.
The steps S24 to S26 correspond to the steps S11 to S13, and in practice, reference may be made to the embodiments of the steps S11 to S13.
In the embodiment of the disclosure, before a game is opened, the playing robot collects the current scene information in real time and performs face detection on it, so as to determine the corresponding interaction information according to the face detection result. In this way, game initialization is completed quickly, the human-machine playing process becomes richer and friendlier, and the user's playing experience can be improved.
Fig. 3 is a schematic flow chart of an implementation of a playing method provided in the embodiment of the present disclosure, where the method may be executed by a playing robot, and as shown in fig. 3, the playing method includes:
and step S31, responding to the touch operation of the first key executed by the target interactive object, and receiving a playing starting instruction.
Here, the first key is used to indicate a key for opening a game, and may be a physical key or a virtual key. In practice, the first key may include, but is not limited to, a start key, an open key, and the like. For example, in the case where the first key is a physical open key, when the user presses the open key, the playing robot receives the game-opening instruction. For example, when the first key is a virtual start key and the user presses the start key on the display screen of the playing robot, the playing robot receives the game-opening instruction. For example, when the first key is a virtual open key, the user scans the identification code on the playing robot to establish a connection with the playing robot and receives the playing interface returned by the playing robot; when the user presses the open key in the playing interface, a game-opening instruction is sent to the playing robot. In implementation, those skilled in the art may select an appropriate manner to receive the game-opening instruction according to actual requirements, which is not limited herein.
And step S32, determining first execution information matched with the touch operation of the first key according to the playing starting instruction.
Here, the first execution information includes, but is not limited to, receiving an opening information instruction, acquiring current user information, controlling a signal lamp to switch an operating state, playing audio, and the like.
And step S33, collecting current chess game information.
Here, the playing robot includes a collection device for collecting current playing game information of the target playing area. In some embodiments, the acquisition device may include a camera.
And step S34, responding to the situation that the current chess playing information meets the preset chess playing rules, controlling a signal lamp to switch the working state according to the first execution information, and playing a third audio for indicating the target interaction object to carry out chess playing operation.
Here, the preset playing rules may include, but are not limited to, whether the placement positions of the respective pieces are correct, whether there are pieces on the board, and the like. The switching of the signal lamp's operating state may include, but is not limited to, switching from an unlit state to a lit state, switching from an unlit state to a blinking state, switching from a lit state to an unlit state, switching from a lit state to a blinking state, switching from one color to another color, switching from one blinking frequency to another blinking frequency, and the like. The third audio may be stored in advance in the playing robot, and may include, but is not limited to, "The game has started, please make your move", "Respected player, it is your move now", and the like.
For example, in gobang, when the playing robot detects that there are no black or white pieces on the board, the signal lamp is controlled to switch from the unlit state to the blinking state, and "The game has started, please make your move" is played.
In some embodiments, the signal lamp includes a first signal lamp and a second signal lamp, the operating state includes a first operating state and a second operating state, and the controlling signal lamp performs the switching of the operating state includes:
controlling the first signal lamp and the second signal lamp to be switched from the first working state to the second working state, wherein the first working state represents a non-enabled state, and the second working state represents an enabled state; the first signal lamp is used for indicating the chess playing starting state, and the second signal lamp is used for indicating the chess playing state of the target interaction object.
In some embodiments, the first signal lamp and the second signal lamp display different colors of light when in the second working state. For example, the first signal lamp and the second signal lamp respectively display blue and green when in the second working state.
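For illustration, the two-lamp switch described above can be sketched as a tiny state holder; the class and function names are assumptions, not the disclosed design.

```python
# Illustrative sketch of switching the first and second signal lamps to the enabled state.

from dataclasses import dataclass

@dataclass
class SignalLamp:
    color: str
    enabled: bool = False  # first working state = disabled

def on_opening_accepted(first_lamp: SignalLamp, second_lamp: SignalLamp) -> None:
    """Switch both lamps from the disabled state to the enabled state."""
    first_lamp.enabled = True   # e.g. blue: game-opening state
    second_lamp.enabled = True  # e.g. green: the user's turn to move

blue, green = SignalLamp("blue"), SignalLamp("green")
on_opening_accepted(blue, green)
```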
In the embodiment of the disclosure, during the playing process the robot interacts with the user by integrating visual, auditory, tactile and other channels, so that the human-machine playing process becomes richer and friendlier, and the user's playing experience is more comprehensive, interesting and efficient.
Fig. 4 is a schematic flow chart of an implementation process of a playing method provided in the embodiment of the present disclosure, where the method may be executed by a playing robot, and as shown in fig. 4, the playing method includes:
and step S41, responding to the touch operation of the second key executed by the target interactive object after the chess playing operation is completed, and receiving a chess playing instruction.
Here, the second key is used to indicate a key for moving chess, and may be a physical key or a virtual key. In practice, the second key may include, but is not limited to, a move key, a confirm key, and the like. For example, in the case where the second key is a physical go key, when the user presses the go key, the playing robot receives a playing go command. For example, when the second key is a virtual enter key, the playing robot receives the playing command when the user presses the enter key on the display screen of the playing robot. For example, when the second key is a virtual go key, a play go command is transmitted to the playing robot when the user presses the go key on the playing interface. In implementation, those skilled in the art can select an appropriate manner to receive the playing command according to actual requirements, which is not limited herein.
And step S42, determining second execution information corresponding to the touch operation of the second key according to the playing chess instruction.
Here, the second execution information may include, but is not limited to, receiving an information instruction of moving chess, controlling a signal lamp to switch an operation state, playing audio, and the like.
And step S43, controlling the signal lamp to switch the working state according to the second execution information.
Here, the switching of the operation state of the signal lamp may include, but is not limited to, switching from a lit state to a blinking state, switching from a lit state to an unlit state, switching from an unlit state to a blinking state, switching from an unlit state to a lit state, switching from one color to another color, switching from one blinking frequency to another blinking frequency, and the like.
In some embodiments, the signal lamp includes a first signal lamp and a second signal lamp, and the operation state includes a first operation state and a second operation state, and the step S43 includes:
and controlling the first signal lamp and the second signal lamp to be switched from the second working state to the first working state according to the second execution information.
Here, the first signal lamp is configured to indicate a playing state of a game, and the second signal lamp is configured to indicate a playing state of the target interactive object. The first operating state represents an disabled state and the second operating state represents an enabled state.
For example, in the case where the first signal lamp and the second signal lamp include a blue lamp and a green lamp, respectively, both the blue lamp and the green lamp are switched from the lit state to the unlit state, that is, the green lamp is not lit.
And step S44, controlling the mechanical arm of the playing robot to carry out playing operation, and playing fourth audio indicating the category of the target playing piece.
Here, the target piece category may include, but is not limited to, the color of the piece, the name of the piece, the position of the piece, and the like. The fourth audio may be stored in advance in the playing robot, and may include, but is not limited to, "Please wait, placing a black piece at row five, column five", "I choose the General", and the like.
For example, in gobang, when the playing robot controls the arm to move a piece, "Please wait, placing a black piece at row five, column five" is played.
Step S45, responding to the end of the chess playing of the playing robots, and acquiring fourth execution information; and controlling the signal lamp to switch the working state according to the fourth execution information, and playing a third audio.
Here, the fourth execution information may be stored in advance in the playing robot and includes, but is not limited to, controlling signal lamps to switch operating states, playing audio, and the like. The third audio may be stored in advance in the playing robot, and may include, but is not limited to, "The game has started, please make your move", "Respected player, it is your move now", and the like.
For example, in gobang, when the playing robot finishes its move, the signal lamp is controlled to switch from the unlit state to the blinking state, and "Respected player, it is your move now" is played.
In some embodiments, the controlling the signal lamp to switch the working state according to the fourth execution information includes:
and controlling the second signal lamp to be switched from the first working state to the second working state according to the fourth execution information.
Here, in a case where the second signal lamp includes a green lamp, the green lamp is switched from the unlit state to the lit state, that is, the green lamp is lit.
In some embodiments, the controlling the mechanical arm of the playing robot to perform a playing operation includes:
and step S441, collecting and analyzing current chess game information.
Here, the playing robot includes a collection device for collecting current playing game information of the target playing area. In some embodiments, the acquisition device may include a camera.
And step S442, determining the target playing pieces and target playing coordinate data corresponding to the target playing pieces in response to the fact that the current playing game information meets the preset playing rules and does not meet playing ending conditions.
Here, the play end condition may include, but is not limited to, a win or a loss, a game time, and the like. For example, in a gobang, the current play information does not include five pieces lined up.
And step S443, controlling the mechanical arm to carry out chess playing operation based on the target playing chess pieces and the target playing coordinate data.
Here, the playing robot completes the playing action based on the playing pieces and the coordinate data.
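A minimal sketch of step S443 is given below; the board-to-arm coordinate transform and the logged "move" are assumptions for illustration, since the disclosure does not specify the arm interface.

```python
# Hypothetical sketch of converting target playing coordinate data into an arm target.

from typing import Tuple

BOARD_ORIGIN_MM = (100.0, 50.0)  # assumed offset of grid cell (0, 0) in the arm frame
SQUARE_SIZE_MM = 30.0            # assumed grid pitch

def board_to_arm_coords(row: int, col: int) -> Tuple[float, float]:
    """Map a (row, col) grid position to an (x, y) point in the arm's frame."""
    x = BOARD_ORIGIN_MM[0] + col * SQUARE_SIZE_MM
    y = BOARD_ORIGIN_MM[1] + row * SQUARE_SIZE_MM
    return x, y

def place_piece(row: int, col: int) -> None:
    """Stand-in for the playing operation: a real robot would call its motion API here."""
    x, y = board_to_arm_coords(row, col)
    print(f"Moving arm to ({x:.1f} mm, {y:.1f} mm) and releasing the piece")
```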
In the embodiment of the disclosure, during the playing process the robot interacts with the user by integrating visual, auditory, tactile and other channels, so that the human-machine playing process becomes richer and friendlier, and the user's playing experience is more comprehensive, interesting and efficient.
Fig. 5 is a schematic flow chart of an implementation of a playing method provided in the embodiment of the present disclosure, where the method may be executed by a playing robot, and as shown in fig. 5, the playing method includes:
and step S51, responding to the touch operation of the third key executed by the target interactive object, and receiving a playing regret instruction.
Here, the third key is used to indicate a take-back (regret) key, and may be a physical key or a virtual key. In practice, the third key may include, but is not limited to, a regret key, a return key, and the like. For example, in the case where the third key is a physical regret key, when the user presses the regret key, the playing robot receives a playing regret instruction. For example, when the third key is a virtual return key, the playing robot receives the playing regret instruction when the user presses the return key on the display screen of the playing robot. For example, when the third key is a virtual regret key and the user presses the regret key in the playing interface, a playing regret instruction is transmitted to the playing robot. In implementation, those skilled in the art may select an appropriate manner to receive the playing regret instruction according to actual requirements, which is not limited herein.
And step S52, determining third execution information corresponding to the touch operation of the third key according to the playing regret instruction.
Here, the third execution information may include, but is not limited to, an instruction to perform the take-back (regret), controlling signal lamps to switch operating states, playing audio, and the like.
And step S53, controlling a signal lamp to switch the working state according to the third execution information, and playing a fifth audio indicating that the target interactive object moves again.
Here, the switching of the signal lamp's operating state may include, but is not limited to, switching from a lit state to a blinking state, switching from a lit state to an unlit state, switching from an unlit state to a blinking state, switching from an unlit state to a lit state, switching from one color to another color, switching from one blinking frequency to another blinking frequency, and the like. The fifth audio may be stored in advance in the playing robot, and may include, but is not limited to, "Respected player, please move again", "Take-back successful, please select a suitable piece", and the like.
For example, in gobang, when the user presses the regret key, the signal lamp is controlled to switch from the unlit state to the blinking state, and "Respected player, please move again" is played.
In some embodiments, the signal lamp includes a second signal lamp, the operating state includes a first operating state and a second operating state, and the controlling the signal lamp to switch the operating state includes:
and controlling the second signal lamp to be in a first working state.
Here, the second signal lamp is configured to indicate the move state of the target interaction object, the first operating state represents a disabled state, and the second operating state represents an enabled state.
In implementation, the second signal lamp may be in either the first working state or the second working state. If the second signal lamp is in the second working state, it is controlled to switch from the second working state to the first working state.
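For illustration, the take-back itself can be sketched as popping the most recent move from an assumed move history; the data structures are hypothetical.

```python
# Illustrative take-back ("regret") sketch under an assumed move-history representation.

from typing import List, Optional, Tuple

Move = Tuple[str, int, int]  # (piece color, row, col)

def take_back_last_move(board: List[List[Optional[str]]], history: List[Move]) -> None:
    """Undo the most recent move and free its square on the board."""
    if not history:
        return  # nothing to undo
    _, row, col = history.pop()
    board[row][col] = None
```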
In the embodiment of the disclosure, during the playing process the robot interacts with the user by integrating visual, auditory, tactile and other channels, so that the human-machine playing process becomes richer and friendlier, and the user's playing experience is more comprehensive, interesting and efficient.
Fig. 6A is a schematic diagram of a playing interaction system provided in an embodiment of the present disclosure, and as shown in fig. 6A, the playing interaction system 60 includes a first interaction module 61 and a second interaction module 62. The first interaction module 61 is used for interaction between the playing robots and the users, and the second interaction module 62 is used for interaction between the playing robots and the developers.
In some embodiments, the first interaction module 61 includes an indicator light interaction module, a key interaction module, a voice interaction module, and an acquisition device interaction module.
And the indicator lamp interaction module is used for prompting the state information in the playing process.
In some embodiments, the indicator lamp interaction module may include, but is not limited to, a first indicator lamp, a second indicator lamp, and a third indicator lamp, each indicator lamp having an enabled state and a disabled state. The enabled state of the first indicator lamp indicates that the game is in the opened state or indicates the playing result, the enabled state of the second indicator lamp indicates that the user may make a move, and the enabled state of the third indicator lamp serves as a warning indication of a move error by the playing robot. In some embodiments, the first, second and third indicator lights may be lights of different colors, for example: the first indicator light is a blue indicator light, the second indicator light is a green indicator light, and the third indicator light is a red indicator light. During play, the system controls the switching of the indicator lamp states in real time according to the playing command. For example, after the user presses the open key, the system controls the first indicator light and the second indicator light to switch from the disabled state to the enabled state. As shown in fig. 6B, when the user 601 presses the open key K1, the system enters the Sa1 state, and the blue light (first indicator light) and the green light (second indicator light) are turned on. For another example, after the user presses the go key having made a move, the system controls the second indicator light to switch from the enabled state to the disabled state; at this time, the first indicator light may either be kept in the enabled state or be switched from the enabled state to the disabled state. As shown in fig. 6B, when the user 601 presses the go key K2, the system enters the Sa2 state, and the blue light (first indicator light) and the green light (second indicator light) are turned off. For another example, after the playing robot finishes its move, the system controls the second indicator light to switch from the disabled state to the enabled state. As shown in fig. 6B, when the playing robot 602 makes its move, the system enters the Sa5 state, the green light (second indicator light) is turned on, and the third audio is played. For another example, when the playing robot detects a move error, the system controls the third indicator light to switch from the disabled state to the enabled state; at this time, the first indicator light and the second indicator light may either be kept in their existing states or be switched to other states. As shown in fig. 6B, when a move error occurs, the system enters the Sa4 state, and the red light (third indicator light) is turned on.
And the key interaction module is used for realizing the interaction between the user and the playing robot.
In some embodiments, the key interaction module may include, but is not limited to, an open key, a go key and a regret key, where pressing the open key characterizes the start of a game, pressing the go key characterizes the user having finished a move, and pressing the regret key characterizes the user taking back a move. During play, the user can press the corresponding key according to actual requirements; at that moment, the system controls the indicator lamp interaction module to switch the state of the corresponding indicator lamp and/or controls the voice interaction module to play the corresponding audio information. For example, when the user presses the open key, this characterizes the start of the game, and the system controls the first indicator light and the second indicator light to switch from the disabled state to the enabled state and/or plays audio instructing the user to move. As shown in fig. 6B, after the user 601 presses the open key K1, the system enters the Sa1 state, the blue light (first indicator light) is turned on, the green light (second indicator light) is turned on, and a third audio instructing the user to move is played. For another example, when the user presses the go key, this indicates that the user has finished the move, and the system controls the second indicator light to switch from the enabled state to the disabled state and/or plays audio marking the end of the move. As shown in fig. 6B, when the user 601 presses the go key K2, the system enters the Sa2 state, and the blue light (first indicator light) and the green light (second indicator light) are turned off. For another example, after the user presses the regret key, this characterizes a take-back by the user, at which time the system controls the second indicator light to be in the disabled state and/or plays audio prompting the user to move again. As shown in fig. 6B, when the user 601 presses the regret key K3, the system enters the Sa3 state, and the green light (second indicator light) is turned off.
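The key-to-lamp behaviour of Fig. 6B can be summarized, purely as an illustration, by a small transition table; the state names follow the figure and the lamp actions follow the description above, while the code itself is an assumption.

```python
# Illustrative transition table for the key/indicator-light interaction of Fig. 6B.

KEY_TRANSITIONS = {
    "open_key":   ("Sa1", {"blue": True,  "green": True}),   # game opened
    "go_key":     ("Sa2", {"blue": False, "green": False}),  # user finished a move
    "regret_key": ("Sa3", {"green": False}),                 # user takes back a move
}

def handle_key(key: str) -> str:
    """Apply the lamp actions for a key press and return the resulting state name."""
    state, lamp_actions = KEY_TRANSITIONS[key]
    for lamp, enabled in lamp_actions.items():
        print(f"[{state}] {lamp} lamp -> {'on' if enabled else 'off'}")
    return state

handle_key("open_key")
```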
And the voice interaction module is used for representing voice prompt in the playing process.
In some embodiments, the voice interaction module includes, but is not limited to, various voice prompts, such as a game-opening voice prompt, a move-start voice prompt, a move-end voice prompt, a move-again voice prompt, an error-reporting voice prompt, an alarm voice prompt, a playing-result voice prompt, a "check" voice prompt, a face-recognition-result voice prompt, a humorous voice prompt, and the like. In practice, the voice prompts may be recorded in advance and triggered to play at specific times. During play, the system plays the corresponding voice prompt in real time according to the playing instruction and/or the playing state. For example, if the user presses the open key, the game starts, and the system controls the voice interaction module to play the game-opening voice prompt. As shown in fig. 6B, after the user 601 presses the open key K1, the system enters the Sa1 state, the blue light (first indicator light) is turned on, the green light (second indicator light) is turned on, and a third audio instructing the user to move is played. For another example, after the user presses the go key, this represents the completion of the user's move, and the system controls the voice interaction module to play a move-completion voice prompt. For another example, when a game ends, the system controls the voice interaction module to play the voice prompt of the target playing result. As shown in fig. 6B, when a game is finished, the system enters the Sa6 state, the blue light (first indicator light) is turned on, and the sixth audio is played. For example, when the playing robot announces "check", the system controls the voice interaction module to play the corresponding "check" voice prompt. As shown in fig. 6B, when the playing robot 602 makes its move, the system enters the Sa5 state, the green light (second indicator light) is turned on, and the third audio is played.
And the acquisition device interaction module is used for realizing the detection of the human face.
In some embodiments, the acquisition device interaction module is configured to acquire scene information of the playing area before a game begins; the system performs face detection processing on the current scene information to obtain a face detection result and, according to the face detection result, controls the voice interaction module to play a corresponding voice prompt and/or controls the playing state of the playing robot and/or performs personalized interaction feedback setting. For example, when no face is detected in the scene information, a voice prompt indicating that the opening operation is stopped is played. For example, when a face is detected in the scene information, an opening voice prompt indicating the opening operation is played, and the playing robot is controlled to enter the ready-to-play state. For example, when a face is detected in the scene information, a voice prompt is played indicating that the user may personalize the interactive feedback of the playing robot.
In some embodiments, the second interaction module includes a vision module log module and a robot control unit log module.
And the visual module log module is used for printing important message information.
In some embodiments, the visual module log module comprises an artificial intelligence vision module, and the vision module continuously prints important messages during execution to facilitate debugging by developers. As shown in fig. 6B, when the playing robot 602 performs the Sp2 operations, the move messages are printed for the developer 603 to view, where the Sp2 operations may include making a move or announcing check.
And the log module of the robot control unit is used for transmitting log messages.
In some embodiments, during play the system transmits log messages of important steps through a Universal Synchronous/Asynchronous Receiver/Transmitter (USART); when the playing robot encounters an error or stalls, this facilitates debugging by the developer. For example, as shown in fig. 6B, when the playing robot 602 performs the Sp1 operations, error/alarm log messages are printed for the developer 603 to view, where the Sp1 operations include outputting errors/alarms. For another example, as shown in fig. 6B, when the playing robot 602 performs the Sp3 operation, the log message of the playing result is printed for the developer 603 to view, where the Sp3 operation may include outputting the playing result.
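As a sketch of transmitting such log messages over a USART, a pyserial-style call is shown below; the port name and baud rate are placeholders, and the use of pyserial is an assumption rather than part of the disclosure.

```python
# Illustrative log transmission over a serial (USART) link using pyserial.

import serial  # pip install pyserial

def send_log(message: str, port: str = "/dev/ttyUSB0", baudrate: int = 115200) -> None:
    """Transmit one log line over the serial link for developers to inspect."""
    with serial.Serial(port, baudrate, timeout=1) as link:
        link.write((message + "\n").encode("utf-8"))

# Example: send_log("ERROR: piece placed on the wrong square")
```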
In the embodiment of the disclosure, a human-machine playing interaction system is constructed by integrating visual, auditory, tactile and other modules, so that the playing robot can give the user a more comprehensive, interesting and efficient interaction experience; at the same time, printing of important message logs is provided, which helps developers solve problems more conveniently.
Based on the above embodiments, the present disclosure provides a playing device, fig. 7 is a schematic diagram of a composition structure of the playing device provided by the present disclosure, and as shown in fig. 7, the playing device 70 includes a receiving module 71, a first determining module 72, and a first processing module 73.
A receiving module 71, configured to receive a target information instruction in response to a target interaction operation that is performed by a target interaction object and meets a preset condition;
a first determining module 72, configured to determine, according to the target information instruction, execution information matched with the target interactive operation;
and a first processing module 73, configured to perform playing processing with the target interaction object according to the execution information.
In some embodiments, the apparatus further comprises: the first acquisition submodule is used for acquiring current scene information of the target playing area; the first detection submodule is used for carrying out face detection processing on the current scene information to obtain a face detection result; and the first determining submodule is used for determining the interaction information matched with the target interaction object based on the face detection result.
In some embodiments, the first determining sub-module is further configured to: responding to the face detection result that the target interaction object exists, and playing a first audio indicating opening operation; and/or responding to the human face detection result that the target interaction object does not exist, and playing second audio for stopping the opening operation.
In some embodiments, the receiving module 71 further includes: the first receiving submodule is used for receiving a playing starting instruction in response to the touch operation of a first key executed by the target interactive object; the first determining module 72 further includes: the second determining submodule is used for determining first execution information matched with the touch operation of the first key according to the playing starting instruction; the first processing module 73 further includes: the second acquisition submodule is used for acquiring current chess game information; and the first processing submodule is used for responding that the current chess playing information meets the preset chess playing rule, controlling a signal lamp to switch the working state according to the first execution information, and playing a third audio frequency for indicating the target interactive object to carry out chess playing operation.
In some embodiments, the signal lamp includes a first signal lamp and a second signal lamp, the operating state includes a first operating state and a second operating state, and the first processing sub-module includes: controlling the first signal lamp and the second signal lamp to be switched from the first working state to the second working state, wherein the first working state represents a non-enabled state, and the second working state represents an enabled state; the first signal lamp is used for indicating the chess playing starting state, and the second signal lamp is used for indicating the chess playing state of the target interaction object.
In some embodiments, the receiving module 71 includes: the second receiving submodule is used for responding to the touch operation of a second key executed by the target interaction object after the chess playing operation is finished, and receiving a chess playing instruction; the first determining module 72 includes: the third determining submodule is used for determining second execution information corresponding to the touch operation of the second key according to the playing and chess playing instruction; the first processing module 73 includes: the second processing submodule is used for controlling the signal lamp to switch the working state according to the second execution information; the third processing submodule is used for controlling the mechanical arm of the playing robot to carry out playing operation and playing fourth audio indicating the category of the target playing chess pieces; the fourth processing submodule is used for responding to the end of chess playing of the playing robot and acquiring fourth execution information; and controlling the signal lamp to switch the working state according to the fourth execution information, and playing a third audio.
In some embodiments, the third processing sub-module comprises: a third acquisition submodule, used for acquiring and analyzing the current chess game information; a fourth determining submodule, used for determining the target playing pieces and target playing coordinate data corresponding to the target playing pieces in response to the current chess game information meeting the preset playing rules and not meeting playing ending conditions; and a fifth processing submodule, used for controlling the mechanical arm to carry out a chess playing operation based on the target playing pieces and the target playing coordinate data.
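Read as an algorithm, the third processing sub-module amounts to: capture the board, check the rules and the end condition, choose a piece and a coordinate, and drive the arm. The sketch below assumes hypothetical board, rules, engine and arm interfaces; none of these APIs come from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Move:
    piece: str                 # category of the target playing piece
    coord: Tuple[int, int]     # target playing coordinate data (row, column)


def robot_move(board, rules, engine, arm) -> Optional[Move]:
    """Collect and analyse the current game, then let the mechanical arm place a piece.

    `board`, `rules`, `engine` and `arm` are assumed interfaces; the disclosure
    does not fix their APIs.
    """
    state = board.capture()                  # collect the current chess game information
    if not rules.is_legal(state) or rules.is_finished(state):
        return None                          # rules violated, or the game has already ended
    piece, coord = engine.best_move(state)   # target playing piece and coordinate data
    arm.place(piece, coord)                  # control the mechanical arm to play the piece
    return Move(piece, coord)
```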
In some embodiments, the receiving module 71 includes: the third receiving submodule is used for receiving a playing regret (undo) instruction in response to the touch operation of a third key executed by the target interaction object; the first determining module 72 includes: a fifth determining submodule, used for determining, according to the playing regret instruction, third execution information corresponding to the touch operation of the third key; the first processing module 73 includes: a sixth processing submodule, used for controlling a signal lamp to switch the working state according to the third execution information and playing fifth audio instructing the target interaction object to move again.
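One plausible reading of the regret branch is an undo of the last round of moves followed by handing the turn back to the player. The sketch below follows that reading; the MoveHistory class and the lamp and speaker calls are hypothetical.

```python
class MoveHistory:
    """Illustrative move stack supporting the 'regret' (undo) operation."""

    def __init__(self):
        self._moves = []

    def push(self, move) -> None:
        self._moves.append(move)

    def undo_last_round(self) -> list:
        """Remove the player's last move and the robot's reply, if present."""
        undone = self._moves[-2:]
        del self._moves[-2:]
        return undone


def on_regret(history: MoveHistory, lamp, speaker) -> None:
    """Third execution information: hand the turn back to the player."""
    history.undo_last_round()
    lamp.switch_to_player_turn()             # hypothetical lamp call
    speaker.play("fifth_audio_move_again")   # ask the player to move again
```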
In some embodiments, the apparatus further comprises: a second determination module, used for determining a target playing result in response to the current chess game information meeting the preset playing rules and meeting playing ending conditions; and a second processing module, used for acquiring fifth execution information based on the target playing result, controlling a signal lamp to switch the working state according to the fifth execution information, and playing sixth audio indicating the target playing result.
In some embodiments, the apparatus further comprises: a third processing module, used for acquiring sixth execution information in response to the current chess game information not meeting the preset playing rules, controlling a signal lamp to switch the working state according to the sixth execution information, and playing seventh audio indicating a chess-moving error alarm.
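Taken together with the preceding embodiments, the handling after a player move can be sketched as a three-way dispatch between the move-error alarm, the game-end announcement and the robot's turn; every name used here (rules.is_legal, lamp.switch_off, the audio labels) is an assumption made for illustration.

```python
def handle_player_move(state, rules, lamp, speaker, robot_turn) -> None:
    """Dispatch on the current chess game information after the player has moved."""
    if not rules.is_legal(state):
        # Sixth execution information: move-error alarm (seventh audio).
        lamp.switch_to_player_turn()
        speaker.play("seventh_audio_move_error_alarm")
    elif rules.is_finished(state):
        # Fifth execution information: announce the target playing result (sixth audio).
        result = rules.result(state)   # e.g. "player_wins", "robot_wins", "draw"
        lamp.switch_off()
        speaker.play(f"sixth_audio_result_{result}")
    else:
        # Rules satisfied and the game is not over: it is the robot's turn to play.
        robot_turn()
```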
The above description of the apparatus embodiments is similar to that of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present disclosure, reference is made to the description of the method embodiments of the present disclosure.
In the embodiments of the present disclosure, if the playing method is implemented in the form of a software functional module and sold or used as a standalone product, the playing method may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
The disclosed embodiment provides a chess-playing robot, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the computer program to realize the steps of the method.
The disclosed embodiments provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described method. The computer readable storage medium may be transitory or non-transitory.
The disclosed embodiments provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program that, when read and executed by a computer, performs some or all of the steps of the above method. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that fig. 8 is a schematic hardware entity diagram of a computer device in an embodiment of the present disclosure, and as shown in fig. 8, the hardware entity of the computer device 800 includes: a processor 801, a communication interface 802, and a memory 803, wherein:
the processor 801 generally controls the overall operation of the computer device 800.
The communication interface 802 may enable the computer device to communicate with other terminals or servers via a network.
The Memory 803 is configured to store instructions and applications executable by the processor 801, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 801 and the modules in the computer device 800, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM). Data may be transferred between the processor 801, the communication interface 802, and the memory 803 via the bus 804.
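Purely as an illustration of how the components of Fig. 8 compose, a minimal sketch follows; the class and attribute names are assumptions and do not appear in the figure or the disclosure.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class ComputerDevice:
    """Rough analogue of the hardware entity sketched in Fig. 8."""
    processor: Any                    # controls the overall operation of the device
    communication_interface: Any      # communicates with other terminals or servers
    memory: Dict[str, Any] = field(default_factory=dict)   # buffers data being processed

    def execute(self, program, *args):
        # In Fig. 8 the processor, interface and memory exchange data over bus 804;
        # in this sketch that exchange is reduced to a direct function call.
        return program(self, *args)
```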
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only an embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure.

Claims (13)

1. A playing method, applied to a playing robot, the method comprising:
receiving a target information instruction in response to target interaction operation which is executed by a target interaction object and meets a preset condition;
determining execution information matched with the target interactive operation according to the target information instruction;
and carrying out target playing operation with the target interaction object according to the execution information.
2. The method according to claim 1, wherein before receiving a target information instruction in response to a target interaction operation performed by a target interaction object and satisfying a preset condition, the method further comprises:
collecting current scene information of a target playing area;
carrying out face detection processing on the current scene information to obtain a face detection result;
and determining the interaction information matched with the target interaction object based on the face detection result.
3. The method of claim 2, wherein determining interaction information that matches the target interaction object based on the face detection result comprises:
in response to the face detection result indicating that the target interaction object exists, playing first audio indicating an opening operation; and/or
in response to the face detection result indicating that the target interaction object does not exist, playing second audio indicating that the opening operation is stopped.
4. The method according to any one of claims 1 to 3, wherein the receiving of a target information instruction in response to a target interaction operation that is performed by a target interaction object and satisfies a preset condition comprises:
receiving a playing starting instruction in response to the touch operation of the first key executed by the target interactive object;
the determining, according to the target information instruction, execution information matched with the target interactive operation includes:
determining first execution information matched with the touch operation of the first key according to the playing starting instruction;
the performing of the target playing operation with the target interaction object according to the execution information includes:
collecting current chess game information;
in response to the current chess game information meeting preset playing rules, controlling a signal lamp to switch the working state according to the first execution information, and playing third audio indicating that the target interaction object is to carry out a chess playing operation.
5. The method of claim 4, wherein the signal lamp comprises a first signal lamp and a second signal lamp, and the working state comprises a first working state and a second working state,
the controlling of the signal lamp to switch the working state comprises: controlling the first signal lamp and the second signal lamp to be switched from the first working state to the second working state, wherein the first working state represents a non-enabled state, and the second working state represents an enabled state; the first signal lamp is used for indicating the chess playing starting state, and the second signal lamp is used for indicating the chess playing state of the target interaction object.
6. The method according to any one of claims 1 to 5, wherein the receiving of a target information instruction in response to a target interaction operation that is performed by a target interaction object and satisfies a preset condition comprises:
responding to the touch operation of a second key executed by the target interactive object after the chess playing operation is finished, and receiving a chess playing instruction;
the determining, according to the target information instruction, execution information matched with the target interactive operation includes:
determining second execution information corresponding to the touch operation of the second key according to the chess playing instruction;
the performing of the target playing operation with the target interaction object according to the execution information includes:
controlling the signal lamp to switch the working state according to the second execution information;
controlling the mechanical arm of the playing robot to carry out a chess playing operation, and playing fourth audio indicating the category of the target playing pieces;
responding to the chess playing completion of the playing robot, and acquiring fourth execution information; and controlling the signal lamp to switch the working state according to the fourth execution information, and playing a third audio.
7. The method according to claim 6, wherein the controlling of the mechanical arm of the playing robot for playing chess operation comprises:
collecting and analyzing current chess game information;
determining the target playing pieces and target playing coordinate data corresponding to the target playing pieces in response to the current chess game information meeting the preset playing rules and not meeting playing ending conditions;
and controlling the mechanical arm to carry out a chess playing operation based on the target playing pieces and the target playing coordinate data.
8. The method according to any one of claims 1 to 7, wherein the receiving of a target information instruction in response to a target interaction operation that is performed by a target interaction object and satisfies a preset condition comprises:
responding to the touch operation of a third key executed by the target interactive object, and receiving a playing regret instruction;
the determining, according to the target information instruction, execution information matched with the target interactive operation includes:
determining third execution information corresponding to the touch operation of the third key according to the playing regret instruction;
the performing of the target playing operation with the target interaction object according to the execution information includes:
controlling a signal lamp to switch the working state according to the third execution information, and playing fifth audio instructing the target interaction object to move again.
9. The method according to any one of claims 1 to 8, further comprising:
determining a target playing result in response to the current chess game information meeting the preset playing rules and meeting playing ending conditions;
acquiring fifth execution information based on the target playing result; and controlling a signal lamp to switch the working state according to the fifth execution information, and playing sixth audio indicating the target playing result.
10. The method according to any one of claims 1 to 8, further comprising:
responding to the current chess game information not meeting the preset playing rules, and acquiring sixth execution information; and controlling a signal lamp to switch the working state according to the sixth execution information, and playing seventh audio indicating a chess-moving error alarm.
11. A device for playing chess, applied to a playing robot, the device comprising:
the receiving module is used for responding to target interaction operation which is executed by a target interaction object and meets a preset condition, and receiving a target information instruction;
the first determining module is used for determining execution information matched with the target interactive operation according to the target information instruction;
and the first processing module is used for carrying out a target playing operation with the target interaction object according to the execution information.
12. A playing robot comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps in the method of any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202111449490.XA 2021-11-30 2021-11-30 Method and device for playing chess, chess playing robot and computer storage medium Withdrawn CN114179100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111449490.XA CN114179100A (en) 2021-11-30 2021-11-30 Method and device for playing chess, chess playing robot and computer storage medium


Publications (1)

Publication Number Publication Date
CN114179100A (en) 2022-03-15

Family

ID=80541935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111449490.XA Withdrawn CN114179100A (en) 2021-11-30 2021-11-30 Method and device for playing chess, chess playing robot and computer storage medium

Country Status (1)

Country Link
CN (1) CN114179100A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105983229A (en) * 2015-02-02 2016-10-05 南君钰 Electronic go game board and gaming method independently allowing human-machine or online go game
CN105664476A (en) * 2015-12-08 2016-06-15 马科峰 Chess set used for human-computer fighting, chess set system and interaction method
CN107729983A (en) * 2017-09-21 2018-02-23 北京深度奇点科技有限公司 A kind of method, apparatus and electronic equipment using realizing of Robot Vision man-machine chess
US20210260484A1 (en) * 2019-04-04 2021-08-26 Tencent Technology (Shenzhen) Company Limited Object control method and apparatus, storage medium, and electronic device
CN110237522A (en) * 2019-05-24 2019-09-17 李娜 A kind of intelligence intelligence development robot
CN111150992A (en) * 2019-12-13 2020-05-15 重庆智能机器人研究院 Chess playing device and method
CN213992852U (en) * 2020-09-02 2021-08-20 苏州探寻文化科技有限公司 Multifunctional intelligence-benefiting chess table set
CN112717366A (en) * 2021-01-15 2021-04-30 广东科学技术职业学院 Teaching equipment for intelligent operation of gobang

Similar Documents

Publication Publication Date Title
JP5285234B2 (en) Game system, information processing system
CA2745235C (en) Interactive game pieces using touch screen devices for toy play
US10248846B2 (en) Information processing device
CN105536266B (en) The control method and system that intelligent building blocks game device, intelligent building blocks are played
US9636576B2 (en) Gaming system and gaming device
US8062131B2 (en) Game system and game apparatus used for the same
EP3228370A1 (en) Puzzle system interworking with external device
KR101421708B1 (en) Smart Terminal for Applicable Analog Game and Analog Game Method Using Smart Terminal
EP2919099B1 (en) Information processing device
KR20090025172A (en) Input terminal emulator for gaming devices
US20170056783A1 (en) System for Obtaining Authentic Reflection of a Real-Time Playing Scene of a Connected Toy Device and Method of Use
US11270087B2 (en) Object scanning method based on mobile terminal and mobile terminal
CN109243248A (en) A kind of virtual piano and its implementation based on 3D depth camera mould group
EP3638386B1 (en) Board game system and method
CN114179100A (en) Method and device for playing chess, chess playing robot and computer storage medium
CN112809709B (en) Robot, robot operating system, robot control device, robot control method, and storage medium
CN111077993B (en) Learning scene switching method, electronic equipment and storage medium
CN115543135A (en) Control method, device and equipment for display screen
TWI729323B (en) Interactive gamimg system
CN111176535B (en) Screen splitting method based on intelligent sound box and intelligent sound box
CN110570851A (en) voice recognition card playing control method and system, mobile terminal and storage medium
CN110084979B (en) Human-computer interaction method and device, controller and interaction equipment
CN101829426A (en) Electronic chessboard gaming systems, configuration methods thereof, electronic chessboard and computer system
CN113209609B (en) Interaction method, device, equipment and medium based on card objects
TWI813343B (en) Optical recognition control interactive toy and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20220315)