Disclosure of Invention
The automated guided vehicle dispatching method and related device of the present application learn the optimal traffic rules during operation by means of a reinforcement learning network, so that the complex work of manually designing passing rules is avoided and dispatching efficiency is improved.
In order to achieve the above object, the present application provides an automatic guided vehicle dispatching method, which includes:
inputting the current paths and the next-stage paths of a plurality of automated guided vehicles into a reinforcement learning network to obtain the priorities of the plurality of automated guided vehicles; and
sequentially determining, in the order of those priorities, whether to send the next-stage path of each automated guided vehicle to that automated guided vehicle.
Wherein inputting the current paths and the next-stage paths of the plurality of automated guided vehicles into the reinforcement learning network comprises: determining a global path for each automated guided vehicle based on the task of each automated guided vehicle; and dividing the global path of each automated guided vehicle into at least one local path.
Inputting the current paths and the next-stage paths of the plurality of automated guided vehicles into the reinforcement learning network comprises: inputting the current local paths and the next-stage local paths of the plurality of automated guided vehicles into the reinforcement learning network.
Sequentially determining whether to send the next-stage path of the automated guided vehicle to the automated guided vehicle comprises: sequentially determining whether to send the next-stage local path of the automated guided vehicle to the automated guided vehicle.
Wherein taking the current local paths of the plurality of automated guided vehicles as the input of the reinforcement learning network comprises:
taking the current local paths and the incomplete global paths of the plurality of automated guided vehicles as the input of the reinforcement learning network, wherein the incomplete global path of each automated guided vehicle includes the next-stage local path of that vehicle.
Wherein taking the current local paths of the plurality of automated guided vehicles as the input of the reinforcement learning network comprises:
obtaining a grid map of each automated guided vehicle based on the size of each automated guided vehicle, the current local path, the incomplete global path, and the size of the load of each automated guided vehicle, wherein the pixel points of a first pixel value in the grid map correspond to the region of the current local path of the automated guided vehicle, and the pixel points of a second pixel value correspond to the region of the incomplete global path of the automated guided vehicle; and taking the grid maps of the plurality of automated guided vehicles as the input of the reinforcement learning network; or alternatively,
forming a feature matrix based on the sizes of the plurality of automated guided vehicles, the current local paths, the incomplete global paths, and the sizes of the loads of the plurality of automated guided vehicles, and taking the feature matrix as the input of the reinforcement learning network.
Wherein the method further comprises:
counting the efficiency of dispatching the automated guided vehicles; and
optimizing the reinforcement learning network based on the efficiency.
The reinforcement learning network comprises a priority computing network and a value function network, wherein the input of the priority computing network is the grid maps of the plurality of automated guided vehicles and its output is the priorities of the plurality of automated guided vehicles, and the input of the value function network is the grid maps and the priorities of the plurality of automated guided vehicles and its output is a predicted value of the task amount completed per unit time;
Counting the efficiency of dispatching the automated guided vehicles comprises: counting the true value of the task amount completed per unit time;
Optimizing the reinforcement learning network based on the efficiency comprises: taking the squared error between the predicted value and the true value of the task amount completed per unit time as the loss to optimize the parameters of the value function network; and
taking the gradient of the task amount completed per unit time with respect to the priorities of the automated guided vehicles in the value function network as the gradient of those priorities in the priority computing network, so as to optimize the parameters of the priority computing network.
Wherein sequentially determining whether to send the next-stage local path of the automated guided vehicle to the automated guided vehicle comprises:
determining whether nodes in a local path of a next stage of the automated guided vehicle are locked by other automated guided vehicles;
if locked, determining not to send the next-stage local path to the automated guided vehicle;
if not locked, sending the next-stage local path to the automated guided vehicle and locking the nodes of the next-stage local path.
Wherein locking the nodes of the transmitted next-stage local path comprises:
locking the nodes of the next-stage local path when the automated guided vehicle receives the next-stage local path; or alternatively,
locking the nodes of the next-stage local path when the next-stage local path is sent to the automated guided vehicle.
Wherein the method further comprises:
and acquiring information of locking nodes and unlocking nodes of the automatic guided vehicles from the plurality of automatic guided vehicles.
Wherein segmenting the global path of each automated guided vehicle into at least one local path comprises:
The global path of each automated guided vehicle is segmented into at least one local path according to the length of the global path and/or the turning point position.
Wherein determining a global path for each automated guided vehicle based on the task of each automated guided vehicle comprises: determining a cost-optimal global path for each automated guided vehicle by using the A* algorithm based on the start point and the end point of the task of each automated guided vehicle.
In order to achieve the above object, the present application provides an automated guided vehicle dispatching device, which includes a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps in the above method.
To achieve the above object, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method.
In the method of the present application, the current paths and the next-stage paths of the plurality of automated guided vehicles are processed by the reinforcement learning network to obtain the priorities of the plurality of automated guided vehicles, and it is then sequentially determined, in the order of those priorities, whether to send the next-stage path of each automated guided vehicle to that vehicle.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present application, the automatic guided vehicle dispatching method and the related devices thereof provided by the present application are described in further detail below with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, fig. 1 is a flow chart of a first embodiment of an automatic guided vehicle dispatching method according to the present application. The automatic guided vehicle dispatching method of the embodiment comprises the following steps.
S101: the current paths and the next-stage paths of the plurality of automated guided vehicles are input into the reinforcement learning network to obtain the priorities of the plurality of automated guided vehicles.
The current paths and the next-stage paths of the plurality of automated guided vehicles are input into the reinforcement learning network, which computes on them to obtain the priorities of the plurality of automated guided vehicles.
The current path of the automated guided vehicle may include a path that has been sent to the automated guided vehicle and that the automated guided vehicle is traveling. It is understood that the current path of the automated guided vehicle may also include a path that has been sent to the automated guided vehicle and that the automated guided vehicle has not traveled. The next-stage path of the automated guided vehicle refers to a path to be transmitted to the automated guided vehicle at the next time in sequence.
Further, when the current path of the automated guided vehicle includes only the path that has been sent to it and that it is currently travelling, the next-stage path of the automated guided vehicle is the path that is to be sent to it and that it is to travel at the next moment in the sequence.
S102: whether to send the next-stage path of each automated guided vehicle to the corresponding automated guided vehicle is determined in the order of the priorities of the plurality of automated guided vehicles.
For example, if the priorities of five automated guided vehicles A, B, C, D, and E are low, medium, highest, lowest, and high respectively, it is first determined whether the next-stage path of vehicle C can be sent to it, then whether the next-stage path of vehicle E can be sent to it, then vehicle B, then vehicle A, and finally vehicle D.
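The priority-ordered dispatch loop can be sketched as follows. This is a minimal illustration: the vehicle names, the numeric priority values, and the `try_send` callback (which would attempt to send a vehicle its next-stage path) are all hypothetical.

```python
def dispatch_by_priority(priorities, try_send):
    """Visit vehicles from highest to lowest priority and attempt
    to send each one its next-stage path via try_send(vehicle)."""
    order = sorted(priorities, key=priorities.get, reverse=True)
    results = {}
    for vehicle in order:
        results[vehicle] = try_send(vehicle)
    return order, results

# Illustrative priorities: C highest, E high, B medium, A low, D lowest.
priorities = {"A": 0.2, "B": 0.5, "C": 0.9, "D": 0.1, "E": 0.8}
order, _ = dispatch_by_priority(priorities, lambda v: True)
print(order)  # ['C', 'E', 'B', 'A', 'D']
```

Whether `try_send` succeeds for a given vehicle is decided separately (e.g. by the node-lock check described later); the loop itself only fixes the order in which vehicles are considered.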
In this embodiment, the current paths and the next-stage paths of the plurality of automated guided vehicles are processed by the reinforcement learning network to obtain the priorities of the plurality of automated guided vehicles, and it is then sequentially determined, in descending order of priority, whether to send the next-stage path of each automated guided vehicle to that vehicle.
Further, as shown in fig. 2, the present embodiment divides a global path planned based on each task of each automated guided vehicle into at least one local path to improve the road surface utilization efficiency. The automatic guided vehicle dispatching method of the embodiment comprises the following steps.
S201: a global path for each automated guided vehicle is determined based on the task of each automated guided vehicle.
The global path for each automated guided vehicle may be determined by path planning based on the relevant information of the task of each automated guided vehicle.
The relevant information of a task comprises the start point and the end point of the task, and may further comprise information such as the priority of the task. It is understood that the start point of a task may be the end point of the last task received by the corresponding automated guided vehicle in chronological order. For example, if vehicle A receives task a first and then receives task b, the end point of task a can be used as the start point of task b. In other embodiments, a task already assigned to an automated guided vehicle may be cancelled by the path planning algorithm and the newly received task taken as the task to be executed next in the sequence, in which case the start point of the newly received task may be the end point of the task the automated guided vehicle is currently executing.
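The chaining of task start points to previous task end points can be sketched as below; the coordinates and the `chain_tasks` helper are purely illustrative.

```python
def chain_tasks(start_position, goals):
    """Return (start, end) legs for a sequence of task goal points:
    each task starts where the previously queued task ends, and the
    first task starts at the vehicle's current position."""
    legs = []
    current = start_position
    for goal in goals:
        legs.append((current, goal))
        current = goal
    return legs

legs = chain_tasks((0, 0), [(3, 0), (3, 4)])
print(legs)  # [((0, 0), (3, 0)), ((3, 0), (3, 4))]
```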
In one implementation, any path that can reach the end of a mission from the start of the mission can be selected as the global path for each automated guided vehicle.
In another implementation, a path planning algorithm may be used to obtain a cost-optimal global path based on the relevant information of the task of each automated guided vehicle. Such path planning algorithms include the A* algorithm, the D* algorithm, and the like.
In yet another implementation, a shortest path planning algorithm may be utilized to obtain a global path with a shortest distance based on information related to the mission of each automated guided vehicle.
It will be appreciated that the global path is for each task of each automated guided vehicle, i.e., each task of each automated guided vehicle has a respective global path.
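A minimal grid-based A* planner can illustrate how a cost-optimal global path might be obtained. This is a sketch under stated assumptions (4-connected grid, unit move cost, Manhattan-distance heuristic); the application's actual map representation and planner are not specified at this level of detail.

```python
import heapq

def astar(free, start, goal):
    """A* on a 4-connected grid; `free` is a set of passable (x, y) cells.
    Returns the node list of a cost-optimal path, or None."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in seen:
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# 5x5 map with a wall blocking the direct route.
free = {(x, y) for x in range(5) for y in range(5)} - {(2, 1), (2, 2), (2, 3)}
path = astar(free, (0, 2), (4, 2))
print(len(path) - 1)  # number of moves on the detour around the wall
```

With an admissible heuristic, the first time the goal is popped the path is cost-optimal, which is the "optimal cost" property the method relies on.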
S202: the global path of each automated guided vehicle is partitioned into at least one local path.
In one implementation, the global path may be divided into a predetermined number of local paths. For example, if the predetermined number is 5, each global path is divided into 5 local paths. Of course, without limitation, the predetermined number may be another value such as 4 or 7. Further, the lengths of the local paths of the same global path may be approximately equal, or the numbers of nodes on the local paths of the same global path may be equal.
In another implementation, the global path may be segmented according to a set path segmentation length to obtain at least one local path.
In yet another implementation, the global path may be divided according to a set number of nodes per path segment to obtain at least one local path. For example, as shown in fig. 3, if the number of nodes per path segment is set to 6, the global path may be divided into an A→F local path, an F→K local path, and a K→M local path, where the A→F local path and the F→K local path each include 6 nodes and the K→M local path includes 3 nodes.
In yet another implementation, the global path is divided into at least one local path based on the turning point positions of the global path, i.e., the turning points are used as division points. For example, as shown in fig. 3, the global path includes 4 turning points 1, 2, 3, and 4; dividing at the turning points yields an A→1 local path, a 1→2 local path, a 2→3 local path, and a 3→4 local path.
In another implementation manner, the global path may be further divided based on a set maximum length of the local path, so as to obtain at least one local path.
It is understood that the global path may be split into at least two local paths.
Of course, the global path may be divided into one local path, that is, the global path may be regarded as a local path without dividing the global path.
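The node-count segmentation described above can be sketched as below, assuming consecutive local paths share their boundary node (consistent with the FIG. 3 example, where the A→F and F→K segments both contain node F). The helper name is hypothetical.

```python
def split_by_node_count(path, n):
    """Split a node list into local paths of at most n nodes, with
    consecutive local paths sharing their boundary node."""
    if len(path) <= n:
        return [path]  # short global path: kept as one local path
    segments = []
    start = 0
    while start < len(path) - 1:
        end = min(start + n - 1, len(path) - 1)
        segments.append(path[start:end + 1])
        start = end  # boundary node begins the next segment
    return segments

nodes = list("ABCDEFGHIJKLM")  # 13 nodes, as in the FIG. 3 example
segs = split_by_node_count(nodes, 6)
print(["".join(s) for s in segs])  # ['ABCDEF', 'FGHIJK', 'KLM']
```

Segmentation by turning points or by maximum segment length would follow the same pattern with a different choice of division points.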
S203: the current local paths and the local paths of the next stage of the automatic guided vehicles are input into the reinforcement learning network to obtain the priorities of the automatic guided vehicles.
In an implementation scenario, the current local path and the local path of the next stage may belong to the same global path.
In another implementation scenario, the current local path and the next local path may also belong to two different global paths, respectively. For example, the current local path of the automated guided vehicle is the local path of task a being performed by the automated guided vehicle, and the local path of the next stage of the automated guided vehicle is the local path of task B to be performed by the automated guided vehicle at the next moment in the sequence.
Furthermore, all local paths of the global path to which the next-stage local path belongs that have not yet been sent to the automated guided vehicle can be combined into an incomplete global path. It will be appreciated that the incomplete global path includes the next-stage local path. The current local path and the incomplete global path are then input into the reinforcement learning network, so that the network produces its output from a global perspective to optimize the passing efficiency of the automated guided vehicles.
The scope of influence of an automated guided vehicle travelling along its current local path is not limited to the nodes on that path: nodes adjacent to the nodes on the current local path may also be affected, and this scope of influence depends on the size of the vehicle itself, the size of its load, and the spacing of the map nodes. Therefore, to further improve the efficiency and accuracy of the dispatching method, the sizes of the plurality of automated guided vehicles and the sizes of their loads can be input into the reinforcement learning network together with the current local paths and the incomplete global paths, so that the network determines the priorities of the plurality of automated guided vehicles based on all four kinds of information.
It can be understood that information such as the current path and the next-stage path of each automated guided vehicle can be integrated into a raster image or a feature matrix for each vehicle, and the raster images or feature matrices of the plurality of automated guided vehicles then input into the reinforcement learning network. Expressing the working conditions of multiple automated guided vehicles as raster images or feature matrices ensures the integrity of the information on the one hand, and on the other hand allows these complex inputs to be processed with relatively mature techniques such as convolutional neural networks, which improves operational efficiency.
For example, a feature matrix is formed based on the size of the plurality of automated guided vehicles, the current path and the next-stage path, and the size of the load of the plurality of automated guided vehicles, and the feature matrix is used as an input of the reinforcement learning network.
Further, the raster image of each automated guided vehicle of the present application represents the swept area of the vehicle with different pixel values, as in the raster image shown in fig. 4. Specifically, the pixel points of the first pixel value (the darkest pixels in fig. 4) form the swept area corresponding to the current local path of the automated guided vehicle, and the pixel points of the second pixel value (the mid-tone pixels in fig. 4) form the swept area corresponding to the incomplete global path of the automated guided vehicle. Further, the pixel values of the region that the automated guided vehicle cannot affect (the lightest pixels in fig. 4) can be unified to a third pixel value.
It will be appreciated that a single-channel gray-scale raster image may be used to represent the information of one automated guided vehicle, and the raster images of multiple automated guided vehicles may be combined to obtain a multi-channel raster image that is input into the reinforcement learning network. That is, the raster images of the plurality of automated guided vehicles may be combined in the channel direction to obtain a multi-channel raster image.
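The raster encoding described above can be sketched as below. The concrete pixel values (2, 1, 0 for the current local path, the incomplete global path, and the unaffected region) and the cell coordinates are illustrative assumptions, not the application's actual encoding.

```python
FIRST, SECOND, THIRD = 2, 1, 0  # illustrative pixel values

def vehicle_raster(rows, cols, current_cells, remaining_cells):
    """Single-channel grid map: cells swept by the current local path
    get FIRST, cells of the incomplete global path get SECOND, and
    cells the vehicle cannot affect keep THIRD."""
    img = [[THIRD] * cols for _ in range(rows)]
    for r, c in remaining_cells:  # second pixel value
        img[r][c] = SECOND
    for r, c in current_cells:    # first pixel value takes precedence
        img[r][c] = FIRST
    return img

# One raster per vehicle; the list of 2-D grids is the multi-channel
# input, channel-first: (num_vehicles, rows, cols).
a = vehicle_raster(4, 4, [(0, 0), (0, 1)], [(0, 2), (0, 3)])
b = vehicle_raster(4, 4, [(3, 0)], [(3, 1), (3, 2)])
batch = [a, b]
print(len(batch), a[0][0], a[0][2], a[1][1])  # 2 2 1 0
```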
Further, the algorithm in the reinforcement learning network that processes the raster images or feature matrices to obtain the priorities of the plurality of automated guided vehicles may be a convolutional neural network. The convolutional neural network processes the raster images or feature matrices and outputs a vehicle priority vector whose length equals the number of automated guided vehicles; the vehicles are then scheduled to pass in order of priority. The forward pass of a convolutional neural network is fast, so it does not suffer from the real-time performance problem of a genetic algorithm.
S204: and determining whether the local path of the next stage of each automatic guided vehicle can be sent to the corresponding automatic guided vehicle according to the priority order of the automatic guided vehicles.
As shown in fig. 5, each road resource node is assigned a mutex lock. When an automated guided vehicle needs to pass through a road node, the state of the node's mutex lock is queried first: if the lock is in the locked state, the vehicle cannot acquire the node and must wait; if the lock is in the open state, the vehicle acquires the node and locks the mutex so that other vehicles cannot enter it; and when the vehicle drives away from the node, the mutex is released so that it can be used by other vehicles. With nodes assigned mutex locks, whether the next-stage local path of an automated guided vehicle can be sent to it can be decided by confirming whether any node in that path is locked by another vehicle. Specifically, if a node in the next-stage local path is locked by another automated guided vehicle, it is determined that the next-stage local path is not sent to the vehicle; if no node in the next-stage local path is locked by another automated guided vehicle, the next-stage local path is sent to the vehicle and its nodes are locked.
Because the globally optimal path is often very long, uniformly locking and protecting the entire global path would greatly limit the overall efficiency of the system. When the global path is divided into at least two local paths, however, the scheduling device sends the vehicle only the next-stage local path rather than the whole global path, so the automated guided vehicle locks only the local path it has received instead of all nodes on the global path. This reduces the number of nodes occupied by each vehicle, optimizes the use of the road surface, and improves the running efficiency of the automated guided vehicles.
In one implementation, the scheduler locks nodes of the next-stage local path when sending the next-stage local path to the automated guided vehicle, so that the next-stage local path is locked.
In another implementation, the scheduling device sends the next-stage local path to the automated guided vehicle, and the nodes of the next-stage local path are locked when the vehicle receives it. Thus, through the mutex locks and the priority-based ordering of path transmission, the paths sent to the automated guided vehicles are guaranteed not to conflict with one another, which ensures the safety of the dispatching system and improves the running efficiency of the automated guided vehicles.
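The mutex-lock protocol can be sketched as a simple in-memory model. This is an illustration only: the node and vehicle identifiers are hypothetical, and a real dispatching system would also report lock and unlock events between vehicle and scheduler.

```python
class NodeLocks:
    """Per-node mutex locks keyed by node id; owner is a vehicle id."""

    def __init__(self):
        self.owner = {}

    def try_lock_path(self, vehicle, nodes):
        """Allow the next-stage local path only if no node is held by
        another vehicle; on success, lock every node atomically."""
        if any(self.owner.get(n) not in (None, vehicle) for n in nodes):
            return False  # wait: some node is held by another vehicle
        for n in nodes:
            self.owner[n] = vehicle
        return True

    def release(self, vehicle, node):
        """Unlock a node the vehicle has driven past and no longer affects."""
        if self.owner.get(node) == vehicle:
            del self.owner[node]

locks = NodeLocks()
print(locks.try_lock_path("AGV-1", ["A", "a", "b", "B"]))  # True
print(locks.try_lock_path("AGV-2", ["Z", "A"]))            # False: A is held
locks.release("AGV-1", "A")  # AGV-1 has driven past node A
print(locks.try_lock_path("AGV-2", ["Z", "A"]))            # True
```

The all-or-nothing check before locking is what prevents the same node from being allocated to two vehicles at once.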
To further increase road utilization, a gradual release strategy may be applied to the nodes on a path. After an automated guided vehicle has travelled part of a local path, the scheduling device or the vehicle can progressively unlock the nodes that have been passed and are no longer affected, releasing them for use by other vehicles and making effective use of road resources. When an automated guided vehicle locks or unlocks nodes itself, it must feed its locking, unlocking, and running states back to the scheduling device, so that the device knows the state of the road nodes in time and can decide, based on their locked or unlocked state, whether the next-stage local path needs to be sent to the corresponding vehicle. In this way the scheduling device can dispatch the automated guided vehicles effectively.
For example, for the local path A→B in fig. 6, the automated guided vehicle locks nodes A, a, b, and B simultaneously when it obtains the path segment; when the vehicle has travelled to point a and point A is no longer affected, the mutex lock on point A must be released immediately and the unlocking information reported to the upper computer.
It will be appreciated that both the local path and the global path may be referred to as paths, i.e. the automated guided vehicle scheduling method of the first embodiment may comprise: inputting the current global paths of the automatic guided vehicles and the global paths of the next stage into the reinforcement learning network to obtain the priorities of the automatic guided vehicles, and determining the sequence of sending the global paths of the next stage to the automatic guided vehicles according to the priority sequence; or inputting the current local paths of the automatic guided vehicles and the local paths of the next stage into the reinforcement learning network to obtain the priorities of the automatic guided vehicles, and determining the sequence of sending the local paths of the next stage to the automatic guided vehicles according to the priority sequence; or inputting the current global path of the automatic guided vehicles and the local path of the next stage into the reinforcement learning network to obtain the priority of the automatic guided vehicles, and determining the sequence of sending the local path of the next stage to the automatic guided vehicles according to the priority sequence; or inputting the current local paths of the automatic guided vehicles and the global paths of the next stage into the reinforcement learning network to obtain the priorities of the automatic guided vehicles, and determining the sequence of sending the global paths of the next stage to the automatic guided vehicles according to the priority sequence.
It can be understood that the automated guided vehicle dispatching method of the application can also count the efficiency of dispatching the automated guided vehicles and optimize the reinforcement learning network based on the counted efficiency, i.e., optimize the parameters of the convolutional neural network with the increase of passing efficiency as the objective function.
For example, the task amount completed per unit time is counted, and the reinforcement learning network is optimized with respect to it. It is understood that the task amount completed per unit time refers to the total amount of tasks completed per unit time by all the dispatched automated guided vehicles.
Optionally, the reinforcement learning network includes a priority calculation network and a value function network.
The input of the priority computing network is the raster images of the plurality of automated guided vehicles, and its output is the priorities of the plurality of automated guided vehicles. Specifically, as shown in fig. 7, the priority computing network takes the raster images of the plurality of automated guided vehicles as input and, through several convolution layers, outputs a one-dimensional vector whose length equals the number of input channels; the values in the vector are normalized to between 0 and 1 by a sigmoid function, and the magnitude of each value represents the traffic priority of the automated guided vehicle corresponding to that channel.
The input of the value function network is the raster images and the priorities of the plurality of automated guided vehicles, and its output is the predicted value of the task amount completed per unit time. As shown in fig. 8, the value function network takes the multi-channel gray-scale raster image and the vehicle priority vector as inputs, merges the two in the middle of several convolution layers, and finally outputs a scalar representing the predicted task amount completed per unit time under the current state and policy, which is used to evaluate the quality of the current policy.
The step of optimizing the reinforcement learning network with respect to the task amount completed per unit time may include: taking the squared error between the predicted and true task amounts completed per unit time as the loss to optimize the parameters of the value function network; and taking the gradient of the task amount completed per unit time with respect to the priorities of the automated guided vehicles in the value function network as the gradient of those priorities in the priority computing network, so as to optimize the parameters of the priority computing network.
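The optimization step can be illustrated with a toy numeric sketch. Both networks are replaced by single linear weights so the gradients are analytic; every number here is hypothetical, and a real implementation would use a deep-learning framework's autograd rather than hand-written derivatives.

```python
w_p = 0.5  # priority-"network" weight: per-vehicle feature -> priority
w_v = 1.0  # value-"network" weight: sum of priorities -> predicted value
lr = 0.1   # learning rate

features = [1.0, 2.0, 3.0]  # hypothetical per-vehicle inputs
true_value = 4.0            # measured task amount completed per unit time

priorities = [w_p * f for f in features]  # priority network forward pass
predicted = w_v * sum(priorities)         # value network forward pass

# Squared error between predicted and true task amount is the loss.
loss = (predicted - true_value) ** 2
err = 2 * (predicted - true_value)

# Gradient of the loss w.r.t. the value-network weight.
grad_w_v = err * sum(priorities)

# The value network's gradient w.r.t. each priority (here simply w_v)
# is taken as the gradient of that priority in the priority network,
# and propagated to the priority-network weight.
grad_w_p = sum(err * w_v * f for f in features)

w_v -= lr * grad_w_v
w_p -= lr * grad_w_p
print(loss, round(w_v, 3), round(w_p, 3))  # 1.0 1.6 1.7
```

The key design point this mirrors is that the priority network is not trained against labelled priorities; its only learning signal is the gradient passed through the value function network.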
It will be appreciated that in calculating the priorities of the plurality of automated guided vehicles using the reinforcement learning network, the priority calculation network is used to calculate the priorities of the plurality of automated guided vehicles without enabling the value function network. When the reinforcement learning network needs to be optimized, a value function network is started to optimize the priority computing network.
The automatic guided vehicle dispatching method is generally realized by an automatic guided vehicle dispatching device, so the application also provides the automatic guided vehicle dispatching device. Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of an automatic guided vehicle dispatching device according to the present application. The automated guided vehicle dispatching device 10 comprises a processor 12 and a memory 11; the memory 11 is used for storing program instructions for implementing an automated guided vehicle scheduling method as described above, and the processor 12 is used for executing the program instructions stored in the memory 11.
Further, as shown in fig. 10, the application also discloses an automated guided vehicle dispatching system 20, which comprises a dispatching device 21 and an AGV fleet 22. The AGV fleet 22 includes at least one automated guided vehicle.
After receiving a new task demand, the scheduling device 21 assigns each new task to an automated guided vehicle according to the current state of the working set, performs path planning for each new task to obtain its global path, divides the global path into local paths and adds them to the working set, and then determines whether the global path or the local paths need to be sent to the corresponding automated guided vehicles.
The automated guided vehicle acquires the global path or the local path from the dispatching device 21 and travels along the current global path or local path under the control of its motion controller.
Furthermore, when the path nodes are configured with mutual-exclusion locks, the automated guided vehicle may lock, in a single unified operation, all nodes on the global path or local path it has received, so as to avoid collisions with other automated guided vehicles and ensure safety. The automated guided vehicle may also send information about the locked nodes to the scheduling device 21, so that when deciding whether to send the next-stage global path or local path of each automated guided vehicle to the corresponding vehicle, the scheduling device 21 knows which nodes are locked and does not send any path containing locked nodes; this prevents the same node from being assigned to multiple automated guided vehicles and ensures their safety. In addition, as an automated guided vehicle drives through nodes, it may unlock the nodes it has passed and no longer depends on, making them available to other automated guided vehicles and improving the utilization of road resources. Likewise, the automated guided vehicle may send information about the unlocked nodes to the scheduling device 21. The scheduling device 21 may store the acquired information about unlocked nodes, locked nodes, and driving states of the automated guided vehicles in the working set.
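The unified locking and per-node unlocking described above can be sketched as a small lock manager. This is a minimal hypothetical example: the class and method names are assumptions, and a real system would also handle persistence and communication with the scheduling device 21. The key property shown is that locking is all-or-nothing over a path, so a partially available path is never half-locked.

```python
class NodeLockManager:
    """Sketch of mutual-exclusion locks on path nodes (hypothetical)."""

    def __init__(self):
        self.locks = {}  # node name -> id of the AGV holding its lock

    def lock_path(self, agv_id, path):
        # Unified locking: succeed only if every node on the path is free
        # or already held by this AGV; otherwise fail with no side effects.
        if any(self.locks.get(node, agv_id) != agv_id for node in path):
            return False
        for node in path:
            self.locks[node] = agv_id
        return True

    def release(self, agv_id, node):
        # Unlock a node the AGV has driven past and no longer depends on,
        # making it available to other AGVs.
        if self.locks.get(node) == agv_id:
            del self.locks[node]
```

A path containing a node locked by another vehicle is rejected as a whole; once the holder releases that node, the path can be locked.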
Of course, in another implementation, the nodes may also be locked and unlocked by the scheduling device 21. In that case, the scheduling device 21 needs to know the positions of the automated guided vehicles, so that it can determine which nodes each vehicle has driven through and unlock the corresponding nodes.
The logic of the automated guided vehicle dispatching method described above is embodied as a computer program. If such a computer program is sold or used as an independent software product, it may be stored in a computer storage medium; the application therefore also proposes a readable storage medium. Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a readable storage medium of the present application. The readable storage medium 30 of this embodiment stores a computer program 31, and the steps of the automated guided vehicle dispatching method are implemented when the computer program 31 is executed by a processor.
The readable storage medium 30 may be a medium capable of storing a computer program, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk; it may also be a server that stores the computer program, where the server may send the stored computer program to another device for execution or may execute the stored computer program itself. The readable storage medium 30 may also be a combination of physical entities, such as a plurality of servers, a server plus a memory, or a memory plus a removable hard disk.
The foregoing describes only embodiments of the present application and does not thereby limit the patent scope of the application; any equivalent structure or equivalent process made using the description and drawings of the present application, and any direct or indirect application in other related technical fields, likewise falls within the scope of patent protection of the application.