CN111766881A - Multi-intelligent forklift combined operation control system - Google Patents
- Publication number
- CN111766881A (application number CN202010603736.3A)
- Authority
- CN
- China
- Prior art keywords
- algorithm
- control system
- point
- operation control
- path
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B66F9/07504 — Fork-lift trucks: constructional features, accessories, e.g. for towing, charging, locking
- G05D1/0221 — Trajectory definition involving a learning process
- G05D1/0234 — Optical position detection using optical markers or beacons
- G05D1/0236 — Optical markers or beacons in combination with a laser
- G05D1/024 — Obstacle or wall sensors in combination with a laser
- G05D1/0251 — Video camera with image processing, extracting 3D information from multiple images (stereo vision)
- G05D1/0257 — Using a radar
- G05D1/0259 — Using magnetic or electromagnetic means
- G05D1/0276 — Using signals provided by a source external to the vehicle
- G05D1/0278 — Using satellite positioning signals, e.g. GPS
- G05D1/028 — Using an RF signal
(All G05D entries fall under G05D1/021, control of position or course in two dimensions specially adapted to land vehicles.)
Abstract
The invention relates to the technical field of forklifts, in particular to a combined operation control system for multiple intelligent forklifts. The system comprises a sensing unit, a planning unit, a decision unit and an execution unit. The sensing unit senses the objective world through various sensors carried by each robot; the planning unit plans and predicts the route to be driven; the decision unit selects and evaluates routes through a machine learning algorithm; and the execution unit controls the robot's equipment and motors through instructions. The design of the invention can reasonably allocate transportation tasks and operation paths and achieve optimal matching between different operation constraints and path space resources, thereby improving the overall efficiency of the system and reducing production cost.
Description
Technical Field
The invention relates to the technical field of forklifts, in particular to a combined operation control system for multiple intelligent forklifts.
Background
Materials are the most important component of power grid construction. The quality of equipment and materials underpins the safe and stable operation of a power grid, and their timely supply guarantees construction progress. As the social economy develops, the scale of power grids keeps expanding, their structure grows increasingly complex, the requirements on equipment and materials rise, and material management becomes ever more difficult; research in the field of material management is therefore urgently needed to improve the management level. At present, materials are mainly carried by manually operated forklifts. This requires workers of a certain proficiency, and since a worker cannot operate continuously throughout working hours, more workers are needed to operate in shifts, which increases production cost.
Disclosure of Invention
The invention aims to provide a combined operation control system for multiple intelligent forklifts, which aims to solve the problems in the background technology.
In order to solve the technical problems, the invention aims to provide a combined operation control system for multiple intelligent forklifts, which comprises a sensing unit, a planning unit, a decision unit and an execution unit, wherein the sensing unit is used for sensing an objective world through various sensors carried by a robot; the planning unit is used for planning and predicting a route to be driven; the decision unit is used for selecting and judging the route through a machine learning algorithm; the execution unit is used for controlling various aspects of equipment and motors of the robot through instructions.
As a further improvement of the technical solution, the sensing unit includes a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; the map reconstruction module is used for establishing map information according to information fed back by various sensors.
The sensor module comprises a visual camera system, a three-dimensional laser sensor, a four-line laser sensor and a GPS positioning sensor.
The visual camera system includes a light source, a lens, a camera, an image acquisition card and a vision processor. Illumination is an important factor affecting the input of a machine vision system, directly influencing the quality of the input data and the application effect. Since there is no universal machine vision lighting equipment, a suitable illuminator should be chosen for each specific application to achieve the best effect. Light sources divide into visible and invisible light; commonly used visible sources are white lamps, fluorescent lamps, mercury lamps and sodium lamps. Ambient light may also degrade image quality, so a protective screen can be added to reduce its influence. By irradiation method, lighting systems divide into back lighting, forward lighting, structured light and stroboscopic light. In back lighting, the measured object is placed between the light source and the camera, which yields images with high contrast; in forward lighting, the light source and the camera are on the same side of the measured object, which is convenient to install; in structured-light illumination, a grating or a linear light source is projected onto the measured object and its three-dimensional information is demodulated from the distortion produced; in stroboscopic illumination, high-frequency light pulses are irradiated onto the object.
The CCD is a typical solid-state imaging device. Its outstanding characteristic is that it uses charge as the signal, unlike devices that use current or voltage as the signal. The imaging device forms charge packets through photoelectric conversion, then transfers and amplifies the output image signal under the action of driving pulses.
The image acquisition card can be a PCI- or AGP-compatible capture card that quickly transfers the image to computer memory for processing.
The three-dimensional laser sensor adopts a three-dimensional laser scanner; the main manufacturers include Leica of Switzerland, Faro and 3D Digital of the United States, Riegl of Austria, Optech of Canada, Mensi of France, and Beijing Rongchuang Technology Development Company of China. Such scanners can directly acquire three-dimensional data of various entities or real scenes, yielding a point cloud sampled from the surface of the measured object, and are fast, simple and accurate. From the point-cloud model and range-image data, a three-dimensional model of the target can be rapidly reconstructed, and lines, surfaces, solids and other three-dimensional data can be obtained for applications such as surveying and mapping, metrology, analysis, simulation, display, monitoring and virtual reality.
The four-line laser sensor adopts a four-line laser radar. Laser and microwave both belong to the electromagnetic spectrum, and the derivation of the lidar working-distance equation parallels that of microwave radar, so the lidar equation can be derived from the microwave radar range equation:

P_R = (P_T · G_T / 4πR²) · (σ / 4πR²) · (πD²/4) · η_Atm² · η_Sys;

in the formula, P_R is the received laser power, P_T the transmitted laser power, G_T the transmit antenna gain, σ the target scattering cross-section, D the receiving aperture, R the lidar-to-target range, η_Atm the one-way atmospheric transmission coefficient, and η_Sys the transmission coefficient of the lidar's optical system.
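A quick numerical sketch of the classical lidar range equation built from these parameters makes the orders of magnitude concrete. The specific values below (laser power, aperture, range, transmission coefficients) are illustrative assumptions, not figures from the patent:

```python
import math

def lidar_received_power(p_t, g_t, sigma, d, r, eta_atm, eta_sys):
    """Classical lidar range equation:
    P_R = (P_T*G_T / 4*pi*R^2) * (sigma / 4*pi*R^2) * (pi*D^2/4) * eta_atm^2 * eta_sys
    All quantities in SI units."""
    return (p_t * g_t / (4 * math.pi * r ** 2)
            * sigma / (4 * math.pi * r ** 2)
            * (math.pi * d ** 2 / 4)
            * eta_atm ** 2 * eta_sys)

# Illustrative values (assumed): 10 W pulse, 5 cm aperture, 30 m range
p_r = lidar_received_power(p_t=10.0, g_t=1.0, sigma=0.5,
                           d=0.05, r=30.0, eta_atm=0.95, eta_sys=0.8)
```

Even at short range the received power is many orders of magnitude below the transmitted power, which is why the receiving aperture D and the optical transmission coefficient η_Sys matter so much in sensor design.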
As a further improvement of the technical solution, the map reconstruction module is based on an image feature matching algorithm. The purpose of image feature matching is to find the transformation relationship between two frames; since the extracted features must remain invariant to environmental changes and the like in a moving environment, a robust interest-point detection operator is required. This embodiment adopts the SIFT feature point algorithm, whose difference-of-Gaussians formula is:

D(x, y, k^{n−1}σ) = (G(x, y, k^n σ) − G(x, y, k^{n−1}σ)) * I(x, y);

wherein G(x, y, k^n σ) is the Gaussian weight function with scale parameter k^n σ, I(x, y) is the image, and * denotes convolution.
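A one-dimensional sketch of the difference-of-Gaussians response D = (G(kσ) − G(σ)) * I illustrates why SIFT uses this operator: it responds strongly near intensity discontinuities and vanishes in flat regions. The helper below is a pure-Python illustration under assumed parameters, not the patent's implementation:

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete 1-D Gaussian weights, normalised to sum to 1."""
    w = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def dog_response(signal, sigma, k):
    """Convolve the signal with the kernel difference G(k*sigma) - G(sigma),
    the 1-D analogue of the DoG formula above. Border samples are clamped."""
    r = int(3 * k * sigma)
    g1 = gaussian_kernel(k * sigma, r)
    g0 = gaussian_kernel(sigma, r)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, (a, b) in enumerate(zip(g1, g0)):
            idx = min(max(i + j - r, 0), n - 1)  # clamp at the borders
            acc += (a - b) * signal[idx]
        out.append(acc)
    return out

# A step edge produces extrema in the DoG response near the discontinuity,
# while the flat regions on either side give (near-)zero response.
resp = dog_response([0.0] * 10 + [1.0] * 10, sigma=1.0, k=1.6)
```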
As a further improvement of the technical scheme, the planning unit adopts an ant colony algorithm to find the path. The ant colony algorithm is a bionic algorithm designed to imitate the foraging behavior by which an ant colony converges on the shortest path. When solving a combinatorial optimization problem with the ant colony algorithm, the problem is first expressed in a standard pheromone-based form; each ant then independently constructs a solution by making decisions according to local pheromones and updates the surrounding pheromones according to the quality of its solution. Repeating this process yields an optimized solution to the combinatorial optimization problem. The method comprises the following steps:
S1.1, establish the map model: input a map of the field represented by a matrix of 0s and 1s, where 0 denotes a passable cell and 1 denotes an obstacle;
S1.2, set the information: input the initial pheromone matrix and select the position parameters of the start point and the end point;
S1.3, select the nodes reachable from the current point in the next step, compute the probability of moving to each node from that node's pheromone, and choose the next point with the roulette-wheel algorithm;
S1.4, update the path and the path length;
S1.5, repeat S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, update the pheromone matrix.
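Steps S1.1–S1.6 can be sketched on a 0/1 grid map as follows. All parameter names and values (number of ants, α, β, ρ, Q) and the Manhattan-distance heuristic are illustrative assumptions, not the patent's:

```python
import random

def ant_colony_paths(grid, start, goal, n_ants=30, n_iter=40,
                     alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Minimal ant-colony path search on a 0/1 grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    tau = {}  # pheromone per directed arc (cell, next_cell); unseen arcs default to 1.0

    def neighbours(cell):
        r0, c0 = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = r0 + dr, c0 + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                yield (r, c)

    def roulette(cur, visited):  # S1.3: pheromone-weighted roulette choice
        cands = [n for n in neighbours(cur) if n not in visited]
        if not cands:
            return None
        # heuristic eta: inverse of remaining Manhattan distance to the goal
        ws = [tau.get((cur, n), 1.0) ** alpha *
              (1.0 / (1 + abs(n[0] - goal[0]) + abs(n[1] - goal[1]))) ** beta
              for n in cands]
        x, acc = random.uniform(0, sum(ws)), 0.0
        for n, w in zip(cands, ws):
            acc += w
            if x <= acc:
                return n
        return cands[-1]

    best = None
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            path, cur = [start], start
            while cur != goal:
                nxt = roulette(cur, set(path))
                if nxt is None:
                    break  # dead end: S1.5, "no path available" for this ant
                path.append(nxt)
                cur = nxt
            if cur == goal:
                tours.append(path)
                if best is None or len(path) < len(best):
                    best = path
        # S1.6: evaporate all arcs, then deposit Q / L_k on each ant's tour
        for arc in tau:
            tau[arc] *= (1 - rho)
        for path in tours:
            for a, b in zip(path, path[1:]):
                tau[(a, b)] = tau.get((a, b), 1.0) + q / len(path)
    return best

# Small demonstration: detour around a central obstacle
best_path = ant_colony_paths([[0, 0, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2))
```

The sketch keeps every S-step recognizable: the grid is the S1.1 matrix, `tau` the S1.2 pheromone matrix, `roulette` implements S1.3, and the evaporation/deposit loop implements S1.6.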
As a further improvement of the present technical solution, the roulette-wheel (wheel disc) transition rule is:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is]^β;

wherein τ_ij(t) is the pheromone concentration on arc (i, j) of the analysis graph, η_ij is the heuristic information associated with arc (i, j), and α and β are the weight parameters of τ_ij(t) and η_ij respectively.
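A minimal sketch of this transition rule: the weights τ_ij^α · η_ij^β are normalized into probabilities, and the next node is then drawn roulette-wheel style. The function names are hypothetical:

```python
import random

def transition_probabilities(tau_row, eta_row, alpha, beta):
    """p_ij = tau_ij^alpha * eta_ij^beta / sum_s tau_is^alpha * eta_is^beta
    for one current node i over its candidate successors j."""
    ws = [t ** alpha * e ** beta for t, e in zip(tau_row, eta_row)]
    total = sum(ws)
    return [w / total for w in ws]

def roulette_pick(probs):
    """Spin the wheel: pick index i with probability probs[i]."""
    x, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if x <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off
```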
As a further improvement of the present technical solution, the pheromone matrix is updated according to:

τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Σ_k Δτ_ij^k, with Δτ_ij^k = Q / L_k(t) on the arcs traversed by ant k;

where ρ is the pheromone volatility coefficient, Q is the pheromone increase intensity, and L_k(t) is the length of the path found by ant k.
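The update can be sketched as evaporation followed by deposits of Q/L_k along each ant's arcs; the dictionary representation of the pheromone matrix below is an assumption for illustration:

```python
def update_pheromone(tau, tours, lengths, rho=0.5, q=1.0):
    """Return the updated pheromone map:
    tau_ij(t+1) = (1 - rho) * tau_ij(t) + sum_k Delta_tau_ij^k,
    where Delta_tau_ij^k = q / L_k on arcs used by ant k, else 0.
    tau maps directed arcs (a, b) to pheromone concentration."""
    new = {arc: (1 - rho) * v for arc, v in tau.items()}  # evaporation
    for path, length in zip(tours, lengths):              # deposits
        for a, b in zip(path, path[1:]):
            new[(a, b)] = new.get((a, b), 0.0) + q / length
    return new
```

Arcs on good (short) tours receive larger deposits, so over iterations the colony's choices concentrate on them while unused arcs decay geometrically.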
As a further improvement of the present solution, the decision unit is based on a shortest-path planning algorithm. During path planning, the position (x, y) of each path point is continuously detected to judge whether the point lies inside an obstacle, i.e. whether a collision with the obstacle occurs, and the detection result is returned to the system. The detector is a neural network of penalty functions from path points to an obstacle, in which the activation functions of the middle-layer and top-layer nodes are step functions. Each middle-layer node determines whether its constraint is satisfied, outputting 1 if so and 0 otherwise. If all middle-layer constraints are satisfied, the top-layer output is 1, indicating that the point is inside the obstacle; if at least one constraint is not satisfied, the top-layer output is 0, indicating that the point is outside the obstacle. The algorithm flow comprises the following steps:
S2.1, input the start point P(x_1, y_1) and the target point P(x_N, y_N). For t = 0, the initial path is generally taken as points uniformly distributed on the straight line from the start point to the target: when x_1 ≠ x_N, x_i = x_1 + i(x_N − x_1)/(N − 1), y_i = (y_N − y_1)(x_i − x_1)/(x_N − x_1) + y_1;
S2.2, for each path point P(x_i, y_i), i = 2, …, N − 1, detect with the detector whether it lies within an obstacle;
S2.3, if the path point P(x_i, y_i) lies inside an obstacle, it is moved by the obstacle avoidance algorithm; if the path point P(x_i, y_i) is not inside an obstacle, it is moved by the moving algorithm.
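The step-function detector described above can be sketched for a convex obstacle encoded as linear half-plane constraints (an assumed encoding): each middle-layer node is a step unit testing one constraint, and the top-layer step unit fires only when all of them do, i.e. when the point is inside the obstacle:

```python
def step(x):
    """Step activation used by the middle- and top-layer nodes."""
    return 1 if x >= 0 else 0

def inside_obstacle(point, half_planes):
    """Top-layer output of the detector network. The obstacle is a convex
    region described by constraints a*x + b*y + c >= 0; each middle-layer
    node outputs 1 iff its constraint holds, and the top node outputs 1
    (collision) only when every middle-layer node fired."""
    middle = [step(a * point[0] + b * point[1] + c) for a, b, c in half_planes]
    return step(sum(middle) - len(half_planes))

# Unit square obstacle: x >= 0, x <= 1, y >= 0, y <= 1
square = [(1, 0, 0), (-1, 0, 1), (0, 1, 0), (0, -1, 1)]
```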
As a further improvement of the present technical solution, the formula of the obstacle avoidance algorithm is as follows:
as a further improvement of the present technical solution, the formula of the moving algorithm is as follows:
Δx_i = −η_2(2x_i − x_{i−1} − x_{i+1});
Δy_i = −η_2(2y_i − y_{i−1} − y_{i+1}).
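One iteration of this moving rule, read as the update x_i ← x_i − η_2(2x_i − x_{i−1} − x_{i+1}) applied to interior path points with endpoints held fixed, can be sketched as follows (η_2 = 0.25 is an assumed step size):

```python
def smooth_step(xs, ys, eta=0.25):
    """Apply one iteration of the moving rule to interior path points:
    each point moves toward the midpoint of its two neighbours, which
    shortens and smooths the path; the endpoints stay fixed."""
    nx, ny = xs[:], ys[:]
    for i in range(1, len(xs) - 1):
        nx[i] = xs[i] - eta * (2 * xs[i] - xs[i - 1] - xs[i + 1])
        ny[i] = ys[i] - eta * (2 * ys[i] - ys[i - 1] - ys[i + 1])
    return nx, ny

# A kinked 3-point path is pulled toward the straight line between endpoints
xs1, ys1 = smooth_step([0.0, 1.0, 2.0], [0.0, 2.0, 0.0])
```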
As a further improvement of the technical scheme, the execution unit is designed around AGV scheduling. A model is established, an optimization objective function for the transportation system is constructed from the model together with its constraint conditions, and transportation tasks and operation paths are then allocated reasonably. This guarantees a one-to-one correspondence between transportation tasks and task-executing vehicles and a system schedule free of repetition and conflict, realizing optimal matching between the operation constraints of different AGVs and path space resources, thereby improving the overall efficiency of the system and reducing production cost. The dispatching management of the automatic stocktaking system works as follows: a stocktaking plan is first made in the material management system; according to the plan, a stocktaking instruction is sent to a standby robot; the robot automatically drives to each bin, identifies the actual inventory by scanning the bin's two-dimensional code, checks it against the warehouse's recorded inventory, and records the result; when stocktaking is finished, a stocktaking report is automatically generated and submitted to the platform.
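As a toy stand-in for the scheduling objective, a brute-force one-to-one assignment of tasks to vehicles minimizing total travel cost illustrates the "no repetition, no conflict" requirement; the cost matrix and function name are assumed for illustration:

```python
from itertools import permutations

def assign_tasks(cost):
    """Exhaustively search one-to-one task-vehicle assignments and return
    the permutation minimising total cost. cost[v][t] is the cost of
    vehicle v executing task t; a permutation guarantees each task is
    executed exactly once by exactly one vehicle (no repeats, no conflicts).
    Brute force is only viable for small n; a real system would use e.g.
    the Hungarian algorithm."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[v][perm[v]] for v in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Two vehicles, two tasks: each vehicle takes the task nearest to it
assignment, total = assign_tasks([[1.0, 9.0], [9.0, 1.0]])
```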
At present, the Petri net model is widely applied in research on AGV scheduling and can accurately describe the dynamic characteristics of a multi-AGV system. However, Petri net modeling is complex, its operation process is involved, and its real-time performance is poor. Tanaka Y. et al. established a timed Petri net model and decomposed it into several independent subnets, thereby solving AGV scheduling optimization with collision-free paths. Nishi T. et al. likewise established a Petri net model, decomposed the complex net into simple, independent subnets, and on this basis obtained each AGV's path-optimizing firing sequence by combining Lagrangian relaxation. In China, Liu et al. performed modeling analysis with a colored Petri net oriented to space resources, achieving conflict-free and collision-free AGV operation. Undirected Petri net models have also been established, the problems of path optimization and selection have been studied comparatively, and certain theoretical results have been obtained.
Compared with the prior art, the invention has the following beneficial effects: in this multi-intelligent-forklift combined operation control system, a model is established, an optimization objective function for the transportation system and its constraint conditions are constructed from the model, and transportation tasks and operation paths are allocated reasonably, guaranteeing a one-to-one correspondence between transportation tasks and task-executing vehicles and a schedule free of repetition and conflict. Optimal matching between different operation constraints and path space resources is realized, thereby improving the overall efficiency of the system and reducing production cost.
Drawings
FIG. 1 is a block diagram of the entire unit of a combined operation control system of embodiment 1;
FIG. 2 is a block diagram of a planning unit flow in embodiment 1;
fig. 3 is a block diagram of a decision unit flow in embodiment 1.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1 to 3, the present embodiment provides a multi-intelligent-forklift combined operation control system, which includes a sensing unit, a planning unit, a decision unit and an execution unit, wherein the sensing unit is used for sensing an objective world through various sensors carried by a robot; the planning unit is used for planning and predicting a route to be traveled; the decision unit is used for selecting and judging the route through a machine learning algorithm; the execution unit is used for controlling various aspects of equipment and motors of the robot through instructions.
In this embodiment, the sensing unit includes a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; and the map reconstruction module is used for establishing map information according to the information fed back by various sensors.
The sensor module comprises a visual camera system, a three-dimensional laser sensor, a four-line laser sensor and a GPS positioning sensor.
The visual camera system comprises a light source, a lens, a camera, an image acquisition card and a vision processor. Illumination is an important factor affecting the input of the machine vision system, directly influencing the quality of the input data and the application effect. Since there is no universal machine vision lighting equipment, a suitable illuminator is selected for each specific application to achieve the best effect. Light sources divide into visible and invisible light; the common visible sources are white lamps, fluorescent lamps, mercury lamps and sodium lamps. Ambient light may also degrade image quality, so a protective screen can be added to reduce its influence. By irradiation method, the lighting system divides into back lighting, forward lighting, structured light and stroboscopic light. In back lighting, the measured object is placed between the light source and the camera, which yields images with high contrast; in forward lighting, the light source and the camera are on the same side of the measured object, which is convenient to install; in structured-light illumination, a grating or a linear light source is projected onto the measured object and its three-dimensional information is demodulated from the distortion produced; in stroboscopic illumination, high-frequency light pulses are irradiated onto the object.
The CCD is a typical solid-state imaging device. Its outstanding characteristic is that it uses charge as the signal, unlike devices that use current or voltage as the signal. The imaging device forms charge packets through photoelectric conversion, then transfers and amplifies the output image signal under the action of driving pulses.
The image acquisition card can be a PCI- or AGP-compatible capture card that quickly transfers the image to computer memory for processing.
The three-dimensional laser sensor adopts a three-dimensional laser scanner; the main manufacturers include Leica of Switzerland, Faro and 3D Digital of the United States, Riegl of Austria, Optech of Canada, Mensi of France, and Beijing Rongchuang Technology Development Company of China. Such scanners can directly acquire three-dimensional data of various entities or scenes, yielding a point cloud sampled from the surface of the measured object, and are fast, simple and accurate. From the point-cloud model and range-image data, a three-dimensional model of the target can be rapidly reconstructed, and lines, surfaces, solids and other three-dimensional data can be obtained for applications such as surveying and mapping, metrology, analysis, simulation, display, monitoring and virtual reality.
The four-line laser sensor adopts a four-line lidar. Laser and microwave both belong to the electromagnetic wave, so the derivation of the lidar working-distance equation is similar to that of microwave radar, and the lidar equation can be derived from the microwave radar working-distance equation; in its standard form,

$P_R = \dfrac{P_T\, G_T\, \sigma\, D^{2}\, \eta_{Atm}^{2}\, \eta_{Sys}}{64\pi R^{4}}$

where $P_R$ is the received laser power, $P_T$ the transmitted laser power, $G_T$ the transmit-antenna gain, $\sigma$ the target scattering cross-section, $D$ the receive aperture, $R$ the lidar-to-target range, $\eta_{Atm}$ the one-way atmospheric transmission coefficient, and $\eta_{Sys}$ the transmission coefficient of the lidar's optical system.
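As an illustration, the range equation above can be evaluated directly. This is a minimal Python sketch; the function name and argument order are ours, and the 64πR⁴ denominator assumes the standard isotropic-scattering form of the equation:

```python
import math

def lidar_received_power(p_t, g_t, sigma, d, r, eta_atm, eta_sys):
    """Received power from the lidar range equation (standard form).

    p_t: transmitted laser power (W); g_t: transmit antenna gain;
    sigma: target scattering cross-section (m^2); d: receive aperture (m);
    r: lidar-to-target range (m); eta_atm: one-way atmospheric transmission;
    eta_sys: optical-system transmission coefficient.
    """
    return (p_t * g_t * sigma * d**2 * eta_atm**2 * eta_sys) / (64 * math.pi * r**4)
```

Note the characteristic $1/R^4$ dependence: doubling the range cuts the received power by a factor of sixteen.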
Further, the map reconstruction module is based on an image feature matching algorithm; the purpose of image feature matching is to find the transformation relationship between two frames, and because the extracted features must be invariant to environmental changes in a moving environment, several interest-point detection operators are in common use. In this embodiment the SIFT feature-point algorithm is adopted, with the following formula:
$D(x, y, k^{n-1}\sigma) = \big(G(x, y, k^{n}\sigma) - G(x, y, k^{n-1}\sigma)\big) * I(x, y)$;

where $G(x, y, k^{n}\sigma)$ is the Gaussian weighting function with scale parameter $k^{n}\sigma$, the scales being written as multiples $k^{n}$ of the base scale $\sigma_0$, and $I(x, y)$ is the input image.
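As a sketch, the difference-of-Gaussians response above can be computed with SciPy's Gaussian filter. The defaults σ0 = 1.6 and k = √2 are conventional SIFT choices, not values taken from this document:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma0=1.6, k=2 ** 0.5, n=1):
    """D(x, y, k^{n-1} sigma0): blur at two adjacent scales and subtract.

    Convolving with a Gaussian and subtracting approximates the
    scale-normalized Laplacian used for SIFT keypoint detection.
    """
    img = image.astype(float)
    low = gaussian_filter(img, k ** (n - 1) * sigma0)   # G(., k^{n-1} sigma0) * I
    high = gaussian_filter(img, k ** n * sigma0)        # G(., k^n sigma0) * I
    return high - low
```

Because both Gaussian blurs preserve the total intensity, the DoG response of any interior patch sums to approximately zero, with a negative peak at a bright impulse.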
Specifically, the planning unit adopts an ant colony algorithm to search for the path; the ant colony algorithm is a bionic algorithm designed to simulate the behavior by which an ant colony forages along the shortest path. When a combinatorial optimization problem is solved with an ant colony algorithm, the problem is first expressed in a standard form in terms of pheromones; each ant then independently constructs a solution by making decisions from local pheromones and updates the surrounding pheromones according to the quality of its solution. Repeating this process yields an optimized solution of the combinatorial optimization problem. The method comprises the following steps:
S1.1, establish the map model: input a matrix of 0s and 1s representing the site map, where 0 marks a passable cell and 1 marks an obstacle;
S1.2, set the information: input the initial pheromone matrix and select the position parameters of the start point and the end point;
S1.3, select the nodes reachable in the next step from the current point, compute the probability of moving to each node from its pheromone, and choose the starting point of the next step with the roulette-wheel algorithm;
S1.4, update the path and the path length;
S1.5, repeat S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, update the pheromone matrix.
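Steps S1.1–S1.6 can be sketched in Python as a toy implementation on a 4-connected grid. The parameter names (n_ants, n_iters, rho, q) and the node-keyed pheromone store are our simplifications, not the patent's; the heuristic term is omitted so selection follows pheromone alone:

```python
import random

def ant_colony_path(grid, start, goal, n_ants=20, n_iters=30,
                    alpha=1.0, rho=0.5, q=1.0, seed=0):
    """Minimal ACO sketch: grid of 0 (free) / 1 (obstacle) cells (S1.1),
    uniform initial pheromone (S1.2), pheromone-weighted roulette walks
    (S1.3-S1.5), then evaporation and deposit of q/L_k (S1.6)."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    tau = {(r, c): 1.0 for r in range(rows) for c in range(cols)}  # S1.2
    best = None
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            pos, path, visited = start, [start], {start}
            while pos != goal:                                      # S1.3-S1.5
                r, c = pos
                moves = [(r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols
                         and grid[r + dr][c + dc] == 0
                         and (r + dr, c + dc) not in visited]
                if not moves:                                       # dead end
                    path = None
                    break
                weights = [tau[m] ** alpha for m in moves]          # roulette
                pos = rng.choices(moves, weights=weights)[0]
                path.append(pos)
                visited.add(pos)
            if path is not None:
                paths.append(path)
                if best is None or len(path) < len(best):
                    best = path
        for key in tau:                                             # S1.6
            tau[key] *= (1 - rho)                                   # evaporate
        for path in paths:
            for node in path:                                       # deposit
                tau[node] += q / len(path)
    return best
```

On an open 3x3 grid the shortest route from corner to corner visits five cells, which the colony finds quickly because shorter tours receive proportionally larger deposits.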
As a further improvement of the technical scheme, the formula of the roulette-wheel algorithm, in its standard ACO form, is as follows:

$p_{ij}^{k}(t) = \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{s \in \mathrm{allowed}_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}]^{\beta}}$

where $\tau_{ij}(t)$ is the pheromone concentration on arc $(i, j)$ of the analysis graph, $\eta_{ij}$ the heuristic information associated with arc $(i, j)$, and $\alpha$, $\beta$ the weight parameters of $\tau_{ij}(t)$ and $\eta_{ij}$ respectively.
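A minimal sketch of roulette-wheel selection under the standard ACO transition rule (the function signature and the dictionary representation of τ and η are ours; treat the exact rule as an assumption):

```python
import random

def roulette_select(candidates, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Pick the next node j from candidates with probability proportional
    to tau[j]**alpha * eta[j]**beta (the ACO transition rule)."""
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    total = sum(weights)
    pick, acc = rng.random() * total, 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= pick:          # cumulative weight crosses the spin point
            return j
    return candidates[-1]        # guard against floating-point round-off
```

With pheromone levels 1 and 3 on two candidate nodes and a flat heuristic, the second node is chosen roughly three times out of four, as the rule requires.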
Still further, the algorithm formula for updating the pheromone matrix, in its standard form, is as follows:

$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{k} \Delta\tau_{ij}^{k}(t), \quad \Delta\tau_{ij}^{k}(t) = Q / L_k(t)$ if ant $k$ traversed arc $(i, j)$, and $0$ otherwise;

where $\rho$ is the pheromone volatilization coefficient, $Q$ the pheromone deposit intensity, and $L_k(t)$ the length of the path constructed by ant $k$.
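The evaporation-plus-deposit update can be sketched as follows (the arc-keyed dictionary and the (path, length) tuples are our representation, not the patent's):

```python
def update_pheromones(tau, ant_paths, rho=0.5, q=1.0):
    """tau: dict mapping arc (i, j) -> pheromone level.
    ant_paths: list of (node_sequence, path_length) per ant.
    Applies tau <- (1 - rho) * tau, then deposits q / L_k on every
    arc traversed by ant k."""
    for arc in tau:
        tau[arc] *= (1.0 - rho)                       # evaporation
    for path, length in ant_paths:
        for arc in zip(path, path[1:]):               # consecutive node pairs
            tau[arc] = tau.get(arc, 0.0) + q / length # deposit q / L_k
    return tau
```

With ρ = 0.5 and Q = 1, an arc starting at level 1.0 that lies on a single ant's length-2 path ends the step back at 1.0: half evaporates and 0.5 is deposited.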
It is worth noting that, during path planning under the shortest-path planning algorithm, the decision unit continually checks the position (x, y) of each path point to determine whether the point lies inside an obstacle, i.e. whether a collision with the obstacle would occur, and returns the detection result to the system. In the shortest-path planning algorithm, a neural network computes the penalty function from a path point to an obstacle: the excitation functions of the middle-layer and top-layer nodes are step functions, and each middle-layer node determines whether the point satisfies its constraint, outputting 1 if so and 0 otherwise. If all middle-layer constraints are satisfied, the top layer outputs 1, indicating that the point is inside the obstacle; if at least one constraint is not satisfied, the top layer outputs 0, indicating that the point is outside the obstacle. The algorithm flow comprises the following steps:
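For a convex obstacle described by linear inequalities, the two-layer step-function network described above reduces to checking every half-plane and AND-ing the results. A minimal sketch, where the constraint representation a·x + b·y + c ≤ 0 is our assumption:

```python
def inside_convex_obstacle(point, constraints):
    """Penalty-network sketch: each middle-layer node applies a step
    function to one linear constraint a*x + b*y + c <= 0 of a convex
    obstacle; the top-layer node outputs 1 only if every node fires."""
    x, y = point
    middle = [1 if a * x + b * y + c <= 0 else 0 for a, b, c in constraints]
    return 1 if all(middle) else 0
```

For the unit square, the four constraints are x ≥ 0, x ≤ 1, y ≥ 0, y ≤ 1 rewritten in the a·x + b·y + c ≤ 0 form; a point inside yields 1, a point outside yields 0.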
S2.1, input the starting point $P(x_1, y_1)$ and the target point $P(x_N, y_N)$; for $t = 0$ the initial path is generally taken as a row of points uniformly distributed on the straight line from the start to the target: when $x_1 \neq x_N$, $x_i = x_1 + i(x_N - x_1)/(N - 1)$ and $y_i = (y_N - y_1)(x_i - x_1)/(x_N - x_1) + y_1$;
S2.2, for each path point $P(x_i, y_i)$, $i = 2, \ldots, N - 1$, detect with the detector whether it is within an obstacle;
S2.3, if the path point $P(x_i, y_i)$ is in an obstacle, the vehicle moves through the obstacle-avoidance algorithm; if the path point $P(x_i, y_i)$ is not in an obstacle, the vehicle moves through the move algorithm.
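The uniformly spaced initial path of step S2.1 can be generated as follows (a minimal sketch assuming x1 ≠ xN, as the formula itself requires):

```python
def initial_path(p1, pN, n):
    """n points uniformly spaced on the segment from start p1 to target pN,
    following x_i = x_1 + i(x_N - x_1)/(n - 1) and the matching y_i."""
    (x1, y1), (xN, yN) = p1, pN
    pts = []
    for i in range(n):
        xi = x1 + i * (xN - x1) / (n - 1)
        yi = (yN - y1) * (xi - x1) / (xN - x1) + y1   # point on the line
        pts.append((xi, yi))
    return pts
```

Each interior point is then fed to the obstacle detector (S2.2) to decide between the obstacle-avoidance and move updates (S2.3).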
Furthermore, the formula of the obstacle avoidance algorithm is as follows:
in addition, the formula of the shift algorithm is as follows:
$\dot{x}_i = -\eta_2 (2x_i - x_{i-1} - x_{i+1})$;
$\dot{y}_i = -\eta_2 (2y_i - y_{i-1} - y_{i+1})$.
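One iteration of the move rule above can be sketched as follows, interpreting the left-hand side as the per-step shift of each interior path point (η2 acts as a step size; this interpretation is our assumption):

```python
def smooth_step(path, eta2=0.25):
    """Apply one move-algorithm update: each interior point shifts by
    -eta2 * (2 p_i - p_{i-1} - p_{i+1}), which pulls the path toward
    the straight line through its neighbours; endpoints stay fixed."""
    new = [path[0]]
    for i in range(1, len(path) - 1):
        (xp, yp), (x, y), (xn, yn) = path[i - 1], path[i], path[i + 1]
        new.append((x - eta2 * (2 * x - xp - xn),
                    y - eta2 * (2 * y - yp - yn)))
    new.append(path[-1])
    return new
```

The update is a discrete Laplacian smoothing: a kink such as (0,0)-(1,1)-(2,0) has its middle point pulled halfway toward the chord after one step with η2 = 0.25.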
It is worth explaining that the execution unit is designed around AGV scheduling: a model is established, an optimization objective function for the transportation system is built from that model together with its constraint conditions, and transportation tasks and operation paths are allocated so that tasks correspond one-to-one with the vehicles executing them, the system schedule is free of repetition and conflict, and the operating constraints of different AGVs are optimally matched to path space resources, thereby improving the overall efficiency of the system and reducing production cost. The scheduling management of the automatic stock-checking system works as follows: a checking plan is first made in the material management system; a checking instruction is sent to a standby robot according to the plan; the robot drives automatically to each bin and identifies the actual inventory by scanning the bin's two-dimensional code; the actual inventory is checked against the warehouse's recorded stock and the results are recorded; and when checking is finished, a checking report is generated automatically and submitted to the platform.
At present, Petri net models are widely applied in research on the AGV scheduling problem, as they can accurately describe the dynamic characteristics of a multi-AGV system; however, Petri net modeling is complex, the solution process is involved, and real-time performance is poor. Tanaka Y. et al. established a timed Petri net model and decomposed it into several independent subnets, solving the AGV scheduling-optimization problem with collision-free paths. Nishi T. et al. likewise established a Petri net model, decomposed the complex network into simple, independent subnets by a certain decomposition method, and on that basis obtained a path-optimizing activation sequence for each AGV by combining Lagrangian relaxation. In China, Liu et al. carried out modeling and analysis with a colored Petri net oriented to space resources, achieving conflict-free and collision-free AGV operation. Undirected Petri models have also been established and the path optimization and selection problems studied, yielding certain theoretical results.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (10)
1. A multi-intelligent-forklift combined operation control system, characterized in that: it comprises a sensing unit, a planning unit, a decision unit and an execution unit, wherein the sensing unit is used for sensing the objective world through the various sensors carried by the robot; the planning unit is used for planning and predicting the route to be driven; the decision unit is used for selecting and judging the route through a machine learning algorithm; and the execution unit is used for controlling the robot's various devices and motors through instructions.
2. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the sensing unit comprises a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; the map reconstruction module is used for establishing map information according to information fed back by various sensors.
3. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the map reconstruction module is based on an image feature matching algorithm, and the algorithm formula is as follows:
$D(x, y, k^{n-1}\sigma) = \big(G(x, y, k^{n}\sigma) - G(x, y, k^{n-1}\sigma)\big) * I(x, y)$;

where $G(x, y, k^{n}\sigma)$ is the Gaussian weighting function with scale parameter $k^{n}\sigma$, the scales being written as multiples $k^{n}$ of the base scale $\sigma_0$, and $I(x, y)$ is the input image.
4. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the planning unit adopts an ant colony algorithm to solve the path, and the method comprises the following processes:
S1.1, establish the map model: input a matrix of 0s and 1s representing the site map, where 0 marks a passable cell and 1 marks an obstacle;
S1.2, set the information: input the initial pheromone matrix and select the position parameters of the start point and the end point;
S1.3, select the nodes reachable in the next step from the current point, compute the probability of moving to each node from its pheromone, and choose the starting point of the next step with the roulette-wheel algorithm;
S1.4, update the path and the path length;
S1.5, repeat S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, update the pheromone matrix.
5. The combined operation control system for multiple intelligent forklifts according to claim 4, wherein: the formula of the roulette algorithm is as follows:
$p_{ij}^{k}(t) = \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{s \in \mathrm{allowed}_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}]^{\beta}}$

where $\tau_{ij}(t)$ is the pheromone concentration on arc $(i, j)$ of the analysis graph, $\eta_{ij}$ the heuristic information associated with arc $(i, j)$, and $\alpha$, $\beta$ the weight parameters of $\tau_{ij}(t)$ and $\eta_{ij}$ respectively.
6. The combined operation control system for multiple intelligent forklifts according to claim 4, wherein: the algorithm formula for updating the pheromone matrix is as follows:
$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{k} \Delta\tau_{ij}^{k}(t), \quad \Delta\tau_{ij}^{k}(t) = Q / L_k(t)$ if ant $k$ traversed arc $(i, j)$, and $0$ otherwise;

where $\rho$ is the pheromone volatilization coefficient, $Q$ the pheromone deposit intensity, and $L_k(t)$ the length of the path constructed by ant $k$.
7. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the decision unit is based on a shortest path planning algorithm, and the algorithm flow comprises the following steps:
S2.1, input the starting point $P(x_1, y_1)$ and the target point $P(x_N, y_N)$; for $t = 0$ the initial path is generally taken as a row of points uniformly distributed on the straight line from the start to the target: when $x_1 \neq x_N$, $x_i = x_1 + i(x_N - x_1)/(N - 1)$ and $y_i = (y_N - y_1)(x_i - x_1)/(x_N - x_1) + y_1$;
S2.2, for each path point $P(x_i, y_i)$, $i = 2, \ldots, N - 1$, detect with the detector whether it is within an obstacle;
S2.3, if the path point $P(x_i, y_i)$ is in an obstacle, the vehicle moves through the obstacle-avoidance algorithm; if the path point $P(x_i, y_i)$ is not in an obstacle, the vehicle moves through the move algorithm.
9. The combined operation control system for multiple intelligent forklifts according to claim 7, wherein: the formula of the shift algorithm is as follows:
$\dot{x}_i = -\eta_2 (2x_i - x_{i-1} - x_{i+1})$;
$\dot{y}_i = -\eta_2 (2y_i - y_{i-1} - y_{i+1})$.
10. the combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the execution unit is designed based on AGV scheduling.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010603736.3A CN111766881A (en) | 2020-06-29 | 2020-06-29 | Multi-intelligent forklift combined operation control system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010603736.3A CN111766881A (en) | 2020-06-29 | 2020-06-29 | Multi-intelligent forklift combined operation control system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111766881A true CN111766881A (en) | 2020-10-13 |
Family
ID=72722876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010603736.3A Pending CN111766881A (en) | 2020-06-29 | 2020-06-29 | Multi-intelligent forklift combined operation control system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111766881A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112506075A (en) * | 2020-11-20 | 2021-03-16 | 重庆交通大学 | TPZN-based intelligent network automobile system cooperative control method |
CN113048973A (en) * | 2021-03-12 | 2021-06-29 | 深圳优艾智合机器人科技有限公司 | Path planning method, path planning device, management system and storage medium |
TWI799051B (en) * | 2022-01-03 | 2023-04-11 | 財團法人工業技術研究院 | Automatic guided vehicle and method for forking pallet |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441736A (en) * | 2007-11-21 | 2009-05-27 | 新乡市起重机厂有限公司 | Path planning method of motor crane robot |
KR101585374B1 (en) * | 2015-04-29 | 2016-01-15 | 중앙대학교 산학협력단 | Apparatus and Method for planning robot path using genetic algorithm |
CN106452903A (en) * | 2016-10-31 | 2017-02-22 | 华南理工大学 | Cloud-aided intelligent warehouse management robot system and method |
CN109828578A (en) * | 2019-02-22 | 2019-05-31 | 南京天创电子技术有限公司 | A kind of instrument crusing robot optimal route planing method based on YOLOv3 |
CN110424075A (en) * | 2019-09-04 | 2019-11-08 | 中国科学院重庆绿色智能技术研究院 | A kind of textile machinery people host computer intelligently doffs control system and method |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441736A (en) * | 2007-11-21 | 2009-05-27 | 新乡市起重机厂有限公司 | Path planning method of motor crane robot |
KR101585374B1 (en) * | 2015-04-29 | 2016-01-15 | 중앙대학교 산학협력단 | Apparatus and Method for planning robot path using genetic algorithm |
CN106452903A (en) * | 2016-10-31 | 2017-02-22 | 华南理工大学 | Cloud-aided intelligent warehouse management robot system and method |
CN109828578A (en) * | 2019-02-22 | 2019-05-31 | 南京天创电子技术有限公司 | A kind of instrument crusing robot optimal route planing method based on YOLOv3 |
CN110424075A (en) * | 2019-09-04 | 2019-11-08 | 中国科学院重庆绿色智能技术研究院 | A kind of textile machinery people host computer intelligently doffs control system and method |
Non-Patent Citations (1)
Title |
---|
郭斯羽 (Guo Siyu): 《面向检测的图像处理技术》 [Image Processing Technology for Detection], Hunan University Press, 31 August 2015 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112506075A (en) * | 2020-11-20 | 2021-03-16 | 重庆交通大学 | TPZN-based intelligent network automobile system cooperative control method |
CN112506075B (en) * | 2020-11-20 | 2022-11-22 | 重庆交通大学 | TPZN-based intelligent network automobile system cooperative control method |
CN113048973A (en) * | 2021-03-12 | 2021-06-29 | 深圳优艾智合机器人科技有限公司 | Path planning method, path planning device, management system and storage medium |
CN113048973B (en) * | 2021-03-12 | 2024-04-19 | 深圳优艾智合机器人科技有限公司 | Path planning method, path planning device, management system and storage medium |
TWI799051B (en) * | 2022-01-03 | 2023-04-11 | 財團法人工業技術研究院 | Automatic guided vehicle and method for forking pallet |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111766881A (en) | Multi-intelligent forklift combined operation control system | |
Caltagirone et al. | Fast LIDAR-based road detection using fully convolutional neural networks | |
CN110008843B (en) | Vehicle target joint cognition method and system based on point cloud and image data | |
Cunneen et al. | Autonomous vehicles and avoiding the trolley (dilemma): vehicle perception, classification, and the challenges of framing decision ethics | |
US20240027621A1 (en) | Method and device for adjusting parameters of lidar, and lidar | |
US11734935B2 (en) | Transferring synthetic lidar system data to real world domain for autonomous vehicle training applications | |
Shen et al. | Parallel sensing in metaverses: Virtual-real interactive smart systems for “6S” sensing | |
Zhao et al. | Autonomous live working robot navigation with real‐time detection and motion planning system on distribution line | |
CN114359562B (en) | Automatic semantic segmentation and labeling system and method for four-dimensional point cloud | |
Orengo et al. | New developments in drone‐based automated surface survey: towards a functional and effective survey system | |
Mourtzis et al. | Uavs for industrial applications: Identifying challenges and opportunities from the implementation point of view | |
CN113674355A (en) | Target identification and positioning method based on camera and laser radar | |
Poornima et al. | Fog robotics-based intelligence transportation system using line-of-sight intelligent transportation | |
CN109291657A (en) | Laser Jet system is identified based on convolutional neural networks space structure part industry Internet of Things | |
Xiao et al. | A path planning algorithm for PCB surface quality automatic inspection | |
Liu et al. | Visual driving assistance system based on few-shot learning | |
CN115830578B (en) | Article inspection method and device and electronic equipment | |
Politi et al. | Path planning and landing for unmanned aerial vehicles using ai | |
Chavan et al. | Obstacle detection and avoidance for automated vehicle: A review | |
CN209198677U (en) | A kind of laser scanning atmospheric environment Grid Monitoring System | |
Caros et al. | Object segmentation of cluttered airborne lidar point clouds | |
CN114565906A (en) | Obstacle detection method, obstacle detection device, electronic device, and storage medium | |
CN112231430A (en) | Map data management method and device | |
Naso et al. | Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data | |
CN116047537B (en) | Road information generation method and system based on laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20201013 |