CN112755520A - Chess force improving method based on Alpha-Beta pruning algorithm - Google Patents

Chess force improving method based on Alpha-Beta pruning algorithm Download PDF

Info

Publication number
CN112755520A
CN112755520A (application no. CN202110067095.9A)
Authority
CN
China
Prior art keywords
chess
search
thread
algorithm
alpha
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110067095.9A
Other languages
Chinese (zh)
Inventor
寇英翰 (Kou Yinghan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110067095.9A
Publication of CN112755520A
Legal status: Pending

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45: Controlling the progress of the video game
    • A63F13/46: Computing the game score
    • A63F13/70: Game security or game management aspects
    • A63F13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/798: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games using an electronically generated display having two or more dimensions characterized by details of game servers
    • A63F2300/55: Details of game data or player data management
    • A63F2300/5546: Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/558: Details of game data or player data management using player registration data, by assessing the players' skills or ranking
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/61: Score computation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a chess force improving method based on the Alpha-Beta pruning algorithm, which comprises the following steps: S1, traverse the board to count the number of each chess pattern for both players; S2, score each pattern based on its weight for both players; S3, run a kill search (a forced-win calculation) on the pattern scores of both players; if the kill search succeeds, obtain the move position from its result, and if it fails, execute step S4; S4, perform a multi-threaded search using the adversarial search algorithm and the Alpha-Beta pruning algorithm, and obtain the best move position from the multi-threaded search results; S5, increase the search depth and repeat step S4 as an iterative-deepening search until the iteration condition is met, then obtain the final move position from the scores of the best move positions found in each iteration. The invention effectively improves both chess force and move-computation efficiency.

Description

Chess force improving method based on Alpha-Beta pruning algorithm
Technical Field
The invention relates to the technical field of machine game playing, and in particular to a chess force improving method based on the Alpha-Beta pruning algorithm.
Background
Gobang (Gomoku, five-in-a-row) is a classic pure-strategy board game for two players. Compared with chess, Chinese chess, Go and shogi, gobang is simple and easy to learn, but not easy to master. Many researchers have studied gobang game playing in depth: Zhang Guang et al. optimized the gobang evaluation function by studying the evaluation values of the various chess patterns, fixing the incomplete evaluation values in some gobang programs; Mao Limin et al. developed a gobang robot that plays physical games by combining image acquisition and analysis; Cheng Yu et al. improved the Alpha-Beta pruning algorithm to address low move-computation efficiency; and other researchers have designed and implemented complete gobang human-machine game software. However, gobang game systems still suffer from low computational efficiency and a low playing level, so a chess force improving method based on the Alpha-Beta pruning algorithm is needed that raises chess force while maintaining move-computation efficiency.
Disclosure of Invention
The invention aims to provide a chess force improving method based on the Alpha-Beta pruning algorithm that solves the above technical problems in the prior art and effectively improves chess force and move-computation efficiency.
To achieve this purpose, the invention provides the following scheme: a chess force improving method based on the Alpha-Beta pruning algorithm, comprising the following steps:
S1, traverse the board to count the number of each chess pattern for both players;
S2, score each pattern based on its weight for both players;
S3, run a kill search on the pattern scores of both players; if the kill search succeeds, obtain the move position from its result, and if it fails, execute step S4;
S4, perform a multi-threaded search using the adversarial search algorithm and the Alpha-Beta pruning algorithm, and obtain the best move position from the multi-threaded search results;
S5, increase the search depth and repeat step S4 as an iterative-deepening search until the iteration condition is met, then obtain the final move position from the scores of the best move positions found in each iteration.
Preferably, in step S1, the chess patterns include: five, live four, rush four, live three, sleep three, live two, sleep two and live one.
Preferably, in step S2, the weights of the chess patterns are ordered as: five > live four > rush four = live three > sleep three = live two > sleep two = live one.
Preferably, in step S3, the kill search obtains the move position using an adversarial search algorithm.
Preferably, the adversarial search algorithm searches over a game-tree model comprising several Max layers and several Min layers arranged alternately, the Max layers and Min layers representing the two players.
Preferably, in step S4, the multi-threaded search proceeds as follows:
S4.1, score each node in the game-tree model with the evaluation function of the heuristic evaluation algorithm, and obtain the nodes whose scores exceed a preset threshold;
S4.2, perform one standalone two-ply search at the top of the game-tree model to obtain the highest-scoring node within those two plies;
S4.3, from the nodes obtained in step S4.1, build and initialize a thread pool of several threads, package the search tasks, insert the highest-scoring node from step S4.2 at the head of each task, and distribute the tasks to the threads to search, obtaining each thread's best move-position score;
S4.4, obtain the best move position from the best move-position scores of all threads.
Preferably, in step S4.3, the search tasks are packaged, each task starts from its own initial nodes, and each thread searches its task independently.
Preferably, in step S4.3, during each thread's search a position evaluation algorithm is used to obtain the thread's best move-position score.
Preferably, in step S5, the search result of each iteration is stored in a transposition table.
The invention discloses the following technical effects:
Addressing the low computational efficiency and low playing level of gobang game systems, the invention provides a chess force improving method based on the Alpha-Beta pruning algorithm, introducing heuristic evaluation, a transposition table, multi-threading and a shallow-first (2-ply) optimization to improve search efficiency and chess force. Experiments show that move-computation efficiency is greatly improved and the acceptable search depth rises to 6 plies; meanwhile, the ideas of kill search and iterative deepening markedly improve the AI's chess force, with a particular advantage in shallow kills.
Drawings
To explain the embodiments of the invention or the prior-art solutions more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the chess force improving method based on the Alpha-Beta pruning algorithm.
Detailed Description
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, this embodiment provides a chess force improving method based on the Alpha-Beta pruning algorithm, comprising the following steps:
S1, traverse the board to count the number of each chess pattern for both players; the two players are our side (the AI) and the opponent.
the chess type includes: five, live four, chong four, live three, sleeping three, live two, sleeping two and live one;
wherein, the five pieces are at least five pieces of chess pieces with the same color;
if the fourth point exists, two points are connected with the fifth point, namely two points can form the fifth point;
the fourth stroke is that a point connecting five points exists;
the third movable chess piece can form the fourth movable chess piece;
sleeping three, namely three chess pieces which can only form four rushes;
the second movable chess pieces can form a third movable chess piece;
the second sleep is two chess pieces which can only form the third sleep;
one piece can form five pieces.
S2, score each pattern based on its weight for both players;
the score of each player is the weighted sum of that player's pattern counts.
The weights are ordered as: five > live four > rush four = live three > sleep three = live two > sleep two = live one, with adjacent levels differing by a factor of at least 10. In this embodiment, the weights of five, live four, rush four, live three, sleep three, live two, sleep two and live one are 10^4, 10^3, 10^2, 10^2, 10, 10, 1 and 1 respectively. All other patterns, such as dead two, dead three and dead four, are given weight 0.
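The weighted scoring of step S2 can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the function and pattern names are hypothetical, while the weight values are the ones given in this embodiment.

```python
# Sketch of the weighted pattern scoring in step S2 (illustrative only;
# names are hypothetical, weights are the embodiment's 10^4 ... 1).
WEIGHTS = {
    "five": 10**4,
    "live_four": 10**3,
    "rush_four": 10**2,
    "live_three": 10**2,
    "sleep_three": 10,
    "live_two": 10,
    "sleep_two": 1,
    "live_one": 1,
}

def pattern_score(counts):
    """Weighted sum over pattern counts; patterns outside the eight
    listed types (e.g. dead three) get weight 0."""
    return sum(WEIGHTS.get(name, 0) * n for name, n in counts.items())
```

For example, one live four plus two sleep threes scores 1000 + 20 = 1020, and a single five outweighs any number of lower patterns a player can realistically hold, matching the factor-of-10 separation between levels.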
S3, run a kill search on the pattern scores of both players; if the kill search succeeds, obtain the move position from its result, and if it fails, execute step S4;
the kill search obtains the move position using the adversarial search algorithm; during this search, a move position is considered only if it can form a killing sequence, which effectively remedies the AI's lack of aggressiveness.
The adversarial search algorithm searches over a game-tree model comprising several Max layers and several Min layers arranged alternately. The Max layer and the Min layer represent our side and the opponent respectively; the adversarial search finds the best move position for the Max layer.
The adversarial search algorithm is the minimax search algorithm, which minimizes the opponent's maximum gain. Its basic idea is as follows:
MAX and MIN represent the two players; p represents a position (a game state); f(p) is positive for positions favoring MAX, negative for positions favoring MIN, and zero for balanced positions;
(1) when it is MIN's turn to move, MAX assumes the worst case, i.e. f(p) takes its minimum;
(2) when it is MAX's turn to move, MAX takes the best case, i.e. f(p) takes its maximum;
matching the two players' adversarial strategies, rules (1) and (2) are applied alternately while the values are backed up the tree.
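Rules (1) and (2) can be sketched as a minimal minimax routine. This is a toy illustration, not the patent's implementation: the tree is a nested-list structure whose leaves are f(p) values from MAX's viewpoint, not a board representation.

```python
# Minimal minimax sketch of rules (1) and (2): MAX layers take the
# maximum of their children, MIN layers take the minimum, and values
# are backed up the tree by the recursion.
def minimax(node, maximizing):
    if not isinstance(node, list):  # leaf: static evaluation f(p)
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)
```

On the two-ply tree `[[3, 5], [2, 9]]` with MAX to move, MIN reduces the branches to 3 and 2, and MAX then chooses 3.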
In this embodiment, the search rules of the kill search are as follows:
(1) Max layer: only the AI's kill nodes are searched (moves forming the attacking patterns live three and rush four);
(2) Min layer: black's kill nodes and black's nodes scoring above a preset threshold are searched, giving priority to moves that keep black from losing;
(3) if a kill position is found, the kill search succeeds; otherwise it fails.
S4, perform a multi-threaded search using the adversarial search algorithm and the Alpha-Beta pruning algorithm, and obtain the best move position from the multi-threaded search results.
The Alpha-Beta pruning algorithm is an optimization of the minimax search algorithm. Its idea is as follows:
at a MAX level, if the current level has already found a maximum value X and the next level of the next node (a MIN level) is found to produce a value no greater than X, that node is pruned directly;
at a MIN level, if the current level has already found a minimum value Y and the next level of the next node (a MAX level) is found to produce a value no less than Y, that node is pruned directly.
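The two pruning rules above can be sketched as follows, again over a toy nested-list tree rather than the patent's board representation; alpha tracks the best value a MAX ancestor already has, beta the best value a MIN ancestor already has.

```python
# Alpha-Beta pruning sketch: returns the same value as plain minimax
# while cutting branches that cannot change the result.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):  # leaf: static evaluation f(p)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:  # a MIN ancestor would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # a MAX ancestor would never allow this branch
                break
        return value
```

On `[[3, 5], [2, 9]]`, once the second MIN branch yields 2 (below the 3 already secured by MAX), the leaf 9 is never visited; the returned value still equals the plain minimax value.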
Because the existing search is single-threaded and serial, it cannot exploit multi-core processors and wastes computing resources. The invention therefore adds a thread pool and distributes different search nodes to the threads at the top level of the search. For example, with four threads, the main thread uses the heuristic evaluation function to obtain the 20 best-scoring nodes at the top level, packages nodes 1-5, 6-10, 11-15 and 16-20 into four tasks, and assigns them to the four threads. The main thread waits for the threads to finish, compares the node and score chosen by each thread, and returns the best position.
In practice, however, some threads receive only poorly placed nodes; the fourth thread, for example, gets the five lowest-scoring nodes, so Alpha-Beta pruning performs badly on them. The total number of nodes searched across the whole tree can then exceed that of a single-threaded search, making the multi-threaded search slower than the single-threaded one.
To optimize the multi-threaded search, note that a 2-ply search is fast and the node it returns is still strong. Therefore, when distributing tasks, a 2-ply search is performed first and the resulting node is inserted at the head of every task. Each thread then searches a strong position first, so its subsequent Alpha-Beta pruning succeeds more often.
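The task split and head-node insertion described above can be sketched with Python's `concurrent.futures.ThreadPoolExecutor` as a stand-in thread pool. The names `split_tasks` and `parallel_best` are hypothetical, and `score_fn` stands in for the per-thread Alpha-Beta search, which here is reduced to a plain `max` for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def split_tasks(candidates, head_node, n_threads=4):
    """Chunk the candidate nodes into n_threads tasks and prepend the
    2-ply best node to each, so every thread tries a strong move first."""
    size = (len(candidates) + n_threads - 1) // n_threads
    chunks = [candidates[i:i + size] for i in range(0, len(candidates), size)]
    return [[head_node] + chunk for chunk in chunks]

def parallel_best(tasks, score_fn):
    """Run one task per worker thread and keep the overall best node."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        per_thread = pool.map(lambda task: max(task, key=score_fn), tasks)
        return max(per_thread, key=score_fn)
```

With 20 candidates and 4 threads this reproduces the 1-5 / 6-10 / 11-15 / 16-20 split from the example, each task led by the shallow-search node.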
The multi-threaded search proceeds as follows:
S4.1, score each node in the game-tree model with the evaluation function of the heuristic evaluation algorithm, and obtain the nodes whose scores exceed a preset threshold;
the efficiency of Alpha-Beta pruning depends heavily on the search order, and if every layer considered all possible moves the runtime would quickly become unaffordable. A heuristic evaluation function is therefore introduced: the incoming board is traversed to find all playable positions near existing stones (extending 3 cells in each of 8 directions from every stone), and each position is scored by the evaluation function. The nodes are sorted by score, and the set of positions scoring above a preset threshold is returned.
S4.2, perform one standalone two-ply search at the top of the game-tree model to obtain the highest-scoring node within those two plies;
S4.3, from the nodes obtained in step S4.1, build and initialize a thread pool of several threads, package the search tasks, insert the highest-scoring node from step S4.2 at the head of each task, and distribute the tasks to the threads to search, obtaining each thread's best move-position score;
the search tasks are packaged, each task starts from its own initial nodes, and each thread searches its task independently.
During each thread's search, a position evaluation algorithm is used to obtain the thread's best move-position score. Since minimax search depends on position scoring, the invention introduces a position evaluation algorithm: the board is traversed and the numbers of patterns such as live two, live three, live four, sleep three and rush four are counted separately for our side (white) and the opponent (black). When the concrete scores are designed, the absolute values of black's pattern scores are enlarged, so that when computing the move position the AI blocks the opponent's attacks first, ensuring it does not lose before it looks for a win.
S4.4, obtain the best move position from the best move-position scores of all threads.
S5, increase the search depth and repeat step S4 as an iterative-deepening search until the iteration condition is met, then obtain the final move position from the scores of the best move positions found in each iteration.
Because the AI so far compares only final scores, path length is ignored. It can easily find a double-three at depth 6 when one already exists at depth 4; since the scores are equal, the AI picks a move at random. A game that can be won in two moves should not take three.
The invention introduces iterative deepening to solve this problem: starting from depth 2, the search depth is increased step by step until a winning move is found or the depth limit is reached.
Meanwhile, during the iterative search, the result of each iteration is stored in a transposition table.
A minimax search in practice evaluates many repeated positions. For example, the move sequences [7,7], [8,7], [7,6], [7,9] and [7,6], [7,9], [7,7], [8,7] differ only in order, yet reach the same final position; in gobang, positions with the same stones are the same situation. The first evaluation of a position is therefore cached in the transposition table, and any later evaluation of that position uses the cached value; cached search results are likewise reused in subsequent minimax searches.
The transposition table uses the Zobrist algorithm to compute a hash key for a position quickly: two two-dimensional arrays are initialized, one for black and one for white, each cell filled with a random number. After every move, the current key is XORed with the random number at the moved position in the corresponding Zobrist array, giving the new key.
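The Zobrist hashing described above can be sketched as follows; the 64-bit key width and the fixed seed are illustrative choices, not requirements of the patent.

```python
# Zobrist hashing sketch: one random-number array per color, XORed
# into the key at each move.
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible
SIZE = 15
ZOBRIST = {
    color: [[rng.getrandbits(64) for _ in range(SIZE)] for _ in range(SIZE)]
    for color in ("black", "white")
}

def update_hash(key, color, row, col):
    """XOR one stone into the key; since XOR is self-inverse, the same
    call also removes the stone when a move is undone."""
    return key ^ ZOBRIST[color][row][col]
```

Because XOR is commutative, any move order reaching the same position produces the same key, which is exactly the property the position cache relies on.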
Because a 2-ply search costs far less time than a 4-ply search, which in turn costs far less than a 6-ply search, and the transposition table is also used for optimization, the overhead of iterative deepening is negligible, while chess force and move-computation efficiency are effectively improved.
To further verify the effect of the chess force improving method based on the Alpha-Beta pruning algorithm, this embodiment runs 5 human-machine games without the kill-search step and verifies the improvement in chess force and move-computation efficiency.
Across the 5 human-machine games, the AI's chess force was far superior to the human's, demonstrating the effectiveness of the chess force improvement.
To verify the effectiveness of each technical feature, this embodiment separately tests four configurations: single thread without the transposition table, single thread with the transposition table, multiple threads without the transposition table, and multiple threads with the transposition table, comparing the average computation time of the first 10 moves. The single-thread and four-thread search times are shown in Tables 1 and 2 respectively, and their comparison is shown in Table 3.
TABLE 1 (reproduced as an image in the original publication; data not recoverable)
TABLE 2 (reproduced as an image in the original publication; data not recoverable)
TABLE 3 (reproduced as an image in the original publication; data not recoverable)
As Tables 1 to 3 show, the optimization effect of the transposition table and multi-threading is very significant. In particular, at depth 6, four threads combined with the transposition table reduce the average single-move computation time to 6.64 seconds; compared with the single-threaded search without a transposition table, the single-threaded search with the transposition table takes 53.5% less time, and the acceptable search depth rises to 6 plies.
The invention has the following technical effects:
Addressing the low computational efficiency and low playing level of gobang game systems, the invention provides a chess force improving method based on the Alpha-Beta pruning algorithm, introducing heuristic evaluation, a transposition table, multi-threading and a shallow-first (2-ply) optimization to improve search efficiency and chess force. Experiments show that move-computation efficiency is greatly improved and the acceptable search depth rises to 6 plies; meanwhile, the ideas of kill search and iterative deepening markedly improve the AI's chess force, with a particular advantage in shallow kills.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (9)

1. A chess force improving method based on the Alpha-Beta pruning algorithm, characterized by comprising the following steps:
S1, traverse the board to count the number of each chess pattern for both players;
S2, score each pattern based on its weight for both players;
S3, run a kill search on the pattern scores of both players; if the kill search succeeds, obtain the move position from its result, and if it fails, execute step S4;
S4, perform a multi-threaded search using the adversarial search algorithm and the Alpha-Beta pruning algorithm, and obtain the best move position from the multi-threaded search results;
S5, increase the search depth and repeat step S4 as an iterative-deepening search until the iteration condition is met, then obtain the final move position from the scores of the best move positions found in each iteration.
2. The chess force improving method based on the Alpha-Beta pruning algorithm according to claim 1, characterized in that in step S1 the chess patterns include: five, live four, rush four, live three, sleep three, live two, sleep two and live one.
3. The chess force improving method based on the Alpha-Beta pruning algorithm according to claim 2, characterized in that in step S2 the weights of the chess patterns are ordered as: five > live four > rush four = live three > sleep three = live two > sleep two = live one.
4. The chess force improving method based on the Alpha-Beta pruning algorithm according to claim 1, characterized in that in step S3 the kill search obtains the move position using an adversarial search algorithm.
5. The chess force improving method based on the Alpha-Beta pruning algorithm according to claim 1, characterized in that the adversarial search algorithm searches over a game-tree model comprising several Max layers and several Min layers arranged alternately, the Max layers and Min layers representing the two players.
6. The Alpha-Beta pruning algorithm-based chess force improvement method according to claim 5, wherein in the step S4, the specific method of multi-thread search comprises:
s4.1, scoring each node in the game tree model by adopting an evaluation function of a heuristic evaluation algorithm to obtain nodes with the scores larger than a preset threshold value;
s4.2, independently carrying out two-layer search once on the top layer of the game tree model to obtain the highest scoring node in the two layers;
s4.3, based on the nodes with the scores larger than the preset threshold value in the step S4.1, acquiring a thread pool consisting of a plurality of threads, initializing the thread pool, packaging the search tasks, inserting the nodes with the highest scores obtained in the step S4.2 into the head of each search task, and then distributing the nodes to each thread for searching to obtain the best chess-moving position score of each thread;
and S4.4, acquiring the optimal chess moving position based on the optimal chess moving position scoring result of each thread.
7. The Alpha-Beta pruning algorithm-based chess force improvement method according to claim 6, wherein in the step S4.3, search tasks are packed and then inserted into each search initial node, and searching is performed according to each thread.
8. The Alpha-Beta pruning algorithm-based chess force improving method according to claim 6, wherein in step S4.3, during the search performed by each thread, a position evaluation algorithm is used to obtain that thread's best chess-moving position score.
9. The Alpha-Beta pruning algorithm-based chess force improving method according to claim 1, wherein in step S5 the search result of each iteration is stored in a transposition table.
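A sketch of the transposition table of claim 9, using Zobrist hashing so that positions reached by different move orders map to the same key and a score found in one iterative-deepening pass can be reused in deeper passes. The 15×15 board size, entry layout, and depth-check rule are assumptions for illustration, not details taken from the patent.

```python
import random

SIZE = 15  # standard Gomoku board (an assumption)
random.seed(0)
# One random 64-bit key per (row, column, player) triple.
ZOBRIST = [[[random.getrandbits(64) for _ in range(2)]
            for _ in range(SIZE)] for _ in range(SIZE)]

def zobrist_key(board):
    """board[r][c] in {0: empty, 1: player 1, 2: player 2}."""
    key = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c]:
                key ^= ZOBRIST[r][c][board[r][c] - 1]
    return key

transposition_table = {}  # key -> (search_depth, score)

def store(board, depth, score):
    transposition_table[zobrist_key(board)] = (depth, score)

def probe(board, depth):
    """Return a cached score only if it was searched at least `depth` deep."""
    entry = transposition_table.get(zobrist_key(board))
    if entry is not None and entry[0] >= depth:
        return entry[1]
    return None
```

The depth check in `probe` is what makes the table safe under iterative deepening: a shallow result is never substituted where a deeper search is required.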
CN202110067095.9A 2021-01-19 2021-01-19 Chess force improving method based on Alpha-Beta pruning algorithm Pending CN112755520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110067095.9A CN112755520A (en) 2021-01-19 2021-01-19 Chess force improving method based on Alpha-Beta pruning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110067095.9A CN112755520A (en) 2021-01-19 2021-01-19 Chess force improving method based on Alpha-Beta pruning algorithm

Publications (1)

Publication Number Publication Date
CN112755520A true CN112755520A (en) 2021-05-07

Family

ID=75702989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110067095.9A Pending CN112755520A (en) 2021-01-19 2021-01-19 Chess force improving method based on Alpha-Beta pruning algorithm

Country Status (1)

Country Link
CN (1) CN112755520A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105817029A (en) * 2016-03-14 2016-08-03 安徽大学 Mixed search method based on road and model in computer-game system of connect6
CN107622092A (en) * 2017-08-24 2018-01-23 河海大学 Searching method of the Chinese chess based on Multiple Optimization, Iterative deepening beta pruning
CN108416166A (en) * 2018-03-27 2018-08-17 北京理工大学 A kind of Chinese checkers method and system based on Dynamic estimation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105817029A (en) * 2016-03-14 2016-08-03 安徽大学 Mixed search method based on road and model in computer-game system of connect6
CN107622092A (en) * 2017-08-24 2018-01-23 河海大学 Searching method of the Chinese chess based on Multiple Optimization, Iterative deepening beta pruning
CN108416166A (en) * 2018-03-27 2018-08-17 北京理工大学 A kind of Chinese checkers method and system based on Dynamic estimation

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Chinese Association for Artificial Intelligence: "Progress of Artificial Intelligence in China (2009)", 31 December 2009, pages: 755 - 757 *
Li Fenglin: "Research on the Practical Application and Practice of Computer Algorithms", 30 April 2018, pages: 208 - 215 *
Dong Huiying et al.: "Research on Gomoku Game Algorithms Using Multiple Search Algorithms", Journal of Shenyang Ligong University *
Dong Huiying et al.: "Research on Gomoku Game Algorithms Using Multiple Search Algorithms", Journal of Shenyang Ligong University, 30 April 2017 (2017-04-30), pages 39 - 43 *
Zheng Jianlei, Kuang Fangjun: "Research and Implementation of an Intelligent Gomoku Game Algorithm Based on Minimax Search and Alpha-Beta Pruning", Journal of Wenzhou University (Natural Science Edition) *
Zheng Jianlei, Kuang Fangjun: "Research and Implementation of an Intelligent Gomoku Game Algorithm Based on Minimax Search and Alpha-Beta Pruning", Journal of Wenzhou University (Natural Science Edition), 25 August 2019 (2019-08-25), pages 53 - 62 *

Similar Documents

Publication Publication Date Title
Hauptman et al. GP-endchess: Using genetic programming to evolve chess endgame players
Hutchinson et al. Selectively enhanced motion perception in core video gamers
Chaslot et al. Adding expert knowledge and exploration in Monte-Carlo Tree Search
CN107622092A (en) Searching method of the Chinese chess based on Multiple Optimization, Iterative deepening beta pruning
Wu et al. Multi-stage temporal difference learning for 2048
van der Werf et al. Solving Go on small boards
Benbassat et al. EvoMCTS: Enhancing MCTS-based players through genetic programming
Liu et al. Using CIGAR for finding effective group behaviors in RTS game
US7793936B2 (en) Draw for battle
CN112755520A (en) Chess force improving method based on Alpha-Beta pruning algorithm
Liu et al. Comparing heuristic search methods for finding effective group behaviors in RTS game
Uchibe et al. Incremental coevolution with competitive and cooperative tasks in a multirobot environment
Benbassat et al. EvoMCTS: A scalable approach for general game learning
Benbassat et al. Evolving both search and strategy for reversi players using genetic programming
Winands et al. The quad heuristic in Lines of Action
Thawonmas et al. Believable judge bot that learns to select tactics and judge opponents
Jaśkowski et al. Winning ant wars: Evolving a human-competitive game strategy using fitnessless selection
Takano et al. Self-play for training general fighting game AI
CN113705828A (en) Battlefield game strategy reinforcement learning training method based on cluster influence degree
Lu et al. Playing Mastermind game by using reinforcement learning
Wang et al. Application and optimization of the UCT algorithm in Einstein würfelt nicht!
Oh et al. Imitation learning for combat system in RTS games with application to starcraft
Sahu et al. TIC-TAC-TOE game between computers: a computational intelligence approach
Gao et al. A speculative strategy
Hu et al. An improved knowledge base for Chinese chess game

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507
