CN111766881A - Multi-intelligent forklift combined operation control system - Google Patents

Multi-intelligent forklift combined operation control system

Info

Publication number
CN111766881A
CN111766881A CN202010603736.3A CN202010603736A CN111766881A CN 111766881 A CN111766881 A CN 111766881A CN 202010603736 A CN202010603736 A CN 202010603736A CN 111766881 A CN111766881 A CN 111766881A
Authority
CN
China
Prior art keywords
algorithm
control system
point
operation control
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010603736.3A
Other languages
Chinese (zh)
Inventor
孙兆臣
杨云燕
董亚倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dali Power Supply Bureau of Yunnan Power Grid Co Ltd
Original Assignee
Dali Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dali Power Supply Bureau of Yunnan Power Grid Co Ltd filed Critical Dali Power Supply Bureau of Yunnan Power Grid Co Ltd
Priority to CN202010603736.3A priority Critical patent/CN111766881A/en
Publication of CN111766881A publication Critical patent/CN111766881A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075Constructional features or details
    • B66F9/07504Accessories, e.g. for towing, charging, locking
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Structural Engineering (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Geology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Civil Engineering (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of forklifts, in particular to a combined operation control system for multiple intelligent forklifts. The system comprises a sensing unit, a planning unit, a decision unit and an execution unit. The sensing unit senses the objective world through the various sensors carried by the robot; the planning unit plans and predicts the route to be travelled; the decision unit selects and evaluates routes through a machine learning algorithm; and the execution unit controls the robot's equipment and motors through instructions. The design reasonably allocates transportation tasks and operation paths and achieves optimal matching between different operation constraints and path space resources, thereby improving the overall efficiency of the system and reducing production cost.

Description

Multi-intelligent forklift combined operation control system
Technical Field
The invention relates to the technical field of forklifts, in particular to a combined operation control system for multiple intelligent forklifts.
Background
Materials are the most important element of power grid construction. The quality of equipment and materials is the basis of safe and stable grid operation, and timely supply of materials guarantees the construction schedule. With social and economic development the scale of the power grid keeps expanding, its structure becomes increasingly complex, the requirements on equipment and materials rise, and material management becomes more and more difficult, so research in this field is urgently needed to raise the level of material management. At present, materials are mainly carried by manually operated forklifts. This requires workers with a certain level of proficiency, and because a worker cannot operate continuously throughout working hours, additional workers are needed to work in shifts, which increases production cost.
Disclosure of Invention
The invention aims to provide a combined operation control system for multiple intelligent forklifts that solves the problems raised in the background art.
In order to solve the above technical problems, the invention provides a combined operation control system for multiple intelligent forklifts, which comprises a sensing unit, a planning unit, a decision unit and an execution unit. The sensing unit is used for sensing the objective world through various sensors carried by the robot; the planning unit is used for planning and predicting the route to be travelled; the decision unit is used for selecting and judging routes through a machine learning algorithm; and the execution unit is used for controlling the robot's equipment and motors through instructions.
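For illustration only, the following Python sketch shows one possible way to organise the four units as a perception-planning-decision-execution pipeline. The class and method names and the placeholder behaviour inside them are assumptions introduced here for clarity; they are not defined in the original filing.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

class SensingUnit:
    """Wraps the sensor, real-time positioning and map-reconstruction modules."""
    def observe(self) -> Dict:
        # Placeholder: would fuse camera, 3-D laser, four-line lidar and GPS data
        # into a pose estimate and a 0/1 occupancy grid of the work area.
        return {"pose": (0.0, 0.0), "grid": [[0, 0], [0, 0]]}

class PlanningUnit:
    """Plans and predicts candidate routes over the reconstructed map."""
    def plan(self, grid, start: Point, goal: Point) -> List[Point]:
        # Placeholder: an ant-colony search over the occupancy grid (sketched later).
        return [start, goal]

class DecisionUnit:
    """Selects and judges routes; here a trivial shortest-candidate rule."""
    def choose(self, candidates: List[List[Point]]) -> List[Point]:
        return min(candidates, key=len)

class ExecutionUnit:
    """Turns the chosen route into equipment and motor commands."""
    def execute(self, path: List[Point]) -> None:
        print(f"driving along {len(path)} way-points")

def control_cycle(sense, plan, decide, act, goal: Point) -> None:
    obs = sense.observe()
    candidates = [plan.plan(obs["grid"], obs["pose"], goal)]
    act.execute(decide.choose(candidates))

control_cycle(SensingUnit(), PlanningUnit(), DecisionUnit(), ExecutionUnit(), (5.0, 3.0))
```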
As a further improvement of the technical solution, the sensing unit includes a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; the map reconstruction module is used for establishing map information according to information fed back by various sensors.
The sensor module comprises a visual camera system, a three-dimensional laser sensor, a four-line laser sensor and a GPS positioning sensor.
The visual camera system comprises a light source, a lens, a camera, an image acquisition card and a vision processor. Illumination is an important factor affecting the input of a machine vision system and directly influences the quality of the input data and the application effect. Since there is no universal machine vision lighting device, a suitable illuminator should be chosen for each specific application to achieve the best effect. Light sources can be divided into visible and invisible light; commonly used visible sources are incandescent lamps, fluorescent lamps, mercury lamps and sodium lamps. Ambient light may also affect image quality, so a protective screen can be added to reduce its influence. According to the irradiation method, the lighting scheme can be divided into back lighting, forward lighting, structured-light lighting and stroboscopic lighting: in back lighting the measured object is placed between the light source and the camera, which yields images with high contrast; in forward lighting the light source and the camera are on the same side of the measured object, which is easy to install; in structured-light lighting a grating or a line light source is projected onto the measured object and its three-dimensional information is demodulated from the distortion of the projected pattern; and in stroboscopic lighting high-frequency light pulses are irradiated onto the object, with image acquisition synchronized to the pulses.
The CCD is a typical solid-state imaging device; its outstanding characteristic is that it uses charge as the signal, unlike other devices that use current or voltage: the imaging device forms charge packets through photoelectric conversion and then, under the action of drive pulses, transfers and amplifies them to output the image signal.
The image acquisition card can be a PCI- or AGP-compatible capture card that quickly transfers images to computer memory for processing.
The three-dimensional laser sensor adopts a three-dimensional laser scanner; the main manufacturers include Leica of Switzerland, FARO and 3D Digital of the United States, Riegl of Austria, Optech of Canada, Mensi of France and, in China, a Beijing creative-industry science and technology development company. A three-dimensional laser scanner can directly acquire three-dimensional data of all kinds of entities or real scenes, yielding a sampled point cloud of the surface of the measured object, and is fast, simple and accurate. From the point cloud model and the range image data, a three-dimensional model of the target can be reconstructed rapidly, and lines, surfaces, volumes and other data of the three-dimensional space can be obtained for applications such as surveying and mapping, metrology, analysis, simulation, display, monitoring and virtual reality.
The four-line laser sensor adopts a four-line laser radar. Laser and microwave are both electromagnetic waves, so the derivation of the lidar range equation is analogous to that of the microwave radar; the lidar equation can be derived from the microwave radar range equation:
P_R = (P_T · G_T · σ · D^2 · η_Atm^2 · η_Sys) / (64π · R^4);
where P_R is the received laser power, P_T is the transmitted laser power, G_T is the transmit antenna gain, σ is the target scattering cross-section, D is the receiving aperture, R is the range from the lidar to the target, η_Atm is the one-way atmospheric transmission coefficient, and η_Sys is the transmission coefficient of the lidar optical system.
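As a minimal numerical sketch, the received power can be evaluated directly from the range equation reconstructed above. The original formula is given only as an image, so the 64π·R^4 denominator is the standard radar-equation form and should be treated as an assumption; the parameter values below are illustrative, not taken from the patent.

```python
import math

def lidar_received_power(p_t, g_t, sigma, d, r, eta_atm, eta_sys):
    """Received power P_R from the lidar range equation as reconstructed above."""
    return (p_t * g_t * sigma * d ** 2 * eta_atm ** 2 * eta_sys) / (64.0 * math.pi * r ** 4)

# Illustrative values only.
p_r = lidar_received_power(
    p_t=10.0,      # transmitted laser power, W
    g_t=1e4,       # transmit antenna gain (dimensionless)
    sigma=0.5,     # target scattering cross-section, m^2
    d=0.05,        # receiving aperture, m
    r=30.0,        # range to the target, m
    eta_atm=0.95,  # one-way atmospheric transmission coefficient
    eta_sys=0.8,   # optical-system transmission coefficient
)
print(f"received power: {p_r:.3e} W")
```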
As a further improvement of the technical solution, the map reconstruction module is based on an image feature matching algorithm. The purpose of image feature matching is to find the transformation between two frames; because the extracted features must be invariant to environmental changes in a moving environment, interest-point detection operators are commonly used. In this scheme a SIFT feature point algorithm is adopted, with the following formula:
D(x, y, k^(n-1)·σ) = (G(x, y, k^n·σ) − G(x, y, k^(n-1)·σ)) * I(x, y);
where G(x, y, k^n·σ) is a Gaussian weighting function with parameter k^n·σ; writing k^n·σ as σ_0, it is given by:
G(x, y, σ_0) = (1 / (2π·σ_0^2)) · exp(−(x^2 + y^2) / (2σ_0^2));
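The difference-of-Gaussian step above can be sketched as follows. This is a minimal illustration of the D(x, y, k^(n-1)σ) computation only, not the full SIFT pipeline; the kernel size and the use of scipy.ndimage.convolve are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size, sigma):
    """2-D Gaussian weighting function G(x, y, sigma), normalised to unit sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()

def difference_of_gaussians(image, sigma, k=1.6, n=1):
    """D(x, y, k^(n-1)*sigma) = (G(x, y, k^n*sigma) - G(x, y, k^(n-1)*sigma)) * I(x, y)."""
    size = int(6 * k ** n * sigma) | 1              # odd kernel covering roughly 3 sigma
    g_hi = gaussian_kernel(size, k ** n * sigma)
    g_lo = gaussian_kernel(size, k ** (n - 1) * sigma)
    return convolve(image.astype(float), g_hi - g_lo, mode="nearest")

image = np.random.default_rng(0).random((64, 64))
dog = difference_of_gaussians(image, sigma=1.2)
print(dog.shape)   # extrema of D over scale and space are SIFT keypoint candidates
```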
As a further improvement of the technical scheme, the planning unit uses an ant colony algorithm to find the path. The ant colony algorithm is a bio-inspired algorithm that simulates the foraging behaviour by which an ant colony finds the shortest path. When a combinatorial optimization problem is solved with an ant colony algorithm, the problem is first expressed in a standard pheromone-based form; each ant then independently constructs a solution from local pheromone information and updates the surrounding pheromone according to the quality of its solution. Repeating this process yields an optimized solution of the combinatorial optimization problem. The method comprises the following steps:
S1.1, establishing the map model: inputting a map of the work area represented by a matrix of 0s and 1s, in which 0 denotes a passable cell and 1 denotes an obstacle;
S1.2, setting the parameters: inputting the initial pheromone matrix and selecting the positions of the start point and the end point;
S1.3, selecting the nodes reachable from the current point in the next step, computing the probability of moving to each node from its pheromone, and selecting the next starting point with the roulette-wheel algorithm;
S1.4, updating the path and the path length;
S1.5, repeating S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, updating the pheromone matrix.
As a further improvement of the present technical solution, the formula of the roulette-wheel selection is as follows:
p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is]^β, for j ∈ allowed_k, and 0 otherwise;
where τ_ij(t) is the pheromone concentration on arc (i, j) of the analysis graph, η_ij is the heuristic information associated with arc (i, j), and α and β are the weight parameters of τ_ij(t) and η_ij, respectively.
As a further improvement of the present technical solution, the algorithm formula for updating the pheromone matrix is as follows:
τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Δτ_ij(t);
Δτ_ij(t) = Σ_k Δτ_ij^k(t), where Δτ_ij^k(t) = Q / L_k(t) if ant k traversed arc (i, j) in iteration t, and 0 otherwise;
where ρ is the pheromone volatilization coefficient, Q is the pheromone increase intensity, and L_k(t) is the length of the path constructed by ant k.
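A compact sketch of steps S1.1-S1.6 on a 0/1 grid map is given below. It is an illustrative implementation only: the 4-connected moves, the distance-to-goal heuristic, per-cell (rather than per-arc) pheromone and all parameter values are assumptions not fixed by the patent.

```python
import numpy as np

def ant_colony_path(grid, start, goal, n_ants=30, n_iter=100,
                    alpha=1.0, beta=2.0, rho=0.3, Q=1.0, seed=0):
    """Grid path search following steps S1.1-S1.6: cells are 0 (free) or 1 (obstacle)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid.shape
    tau = np.ones((rows, cols))                     # S1.2: initial pheromone matrix
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # 4-connected neighbourhood (assumption)
    best_path, best_len = None, np.inf

    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            pos, path, visited = start, [start], {start}
            while pos != goal:
                cand = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves]
                cand = [c for c in cand if 0 <= c[0] < rows and 0 <= c[1] < cols
                        and grid[c] == 0 and c not in visited]
                if not cand:                        # dead end: this ant finds no path (S1.5)
                    path = None
                    break
                # S1.3: roulette-wheel selection weighted by pheromone and heuristic
                eta = np.array([1.0 / (abs(c[0] - goal[0]) + abs(c[1] - goal[1]) + 1e-9)
                                for c in cand])
                w = np.array([tau[c] for c in cand]) ** alpha * eta ** beta
                pos = cand[rng.choice(len(cand), p=w / w.sum())]
                path.append(pos)
                visited.add(pos)
            if path is not None:                    # S1.4: record the path and its length
                paths.append(path)
                if len(path) < best_len:
                    best_path, best_len = path, len(path)
        tau *= (1.0 - rho)                          # S1.6: evaporation ...
        for path in paths:                          # ... plus deposit Q / L_k on each cell
            for cell in path:
                tau[cell] += Q / len(path)
    return best_path

grid = np.array([[0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0], [0, 1, 1, 0]])
print(ant_colony_path(grid, start=(0, 0), goal=(3, 3)))
```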
As a further improvement of the present solution, the decision unit is based on a shortest-path planning algorithm. During path planning the position (x, y) of each path point is repeatedly checked to determine whether the point lies inside an obstacle, i.e. whether a collision occurs, and the detection result is returned to the system. In this algorithm a neural network implements the penalty function from a path point to an obstacle: the activation functions of the middle-layer and top-layer nodes are step functions, and each middle-layer node determines whether the point satisfies its constraint, outputting 1 if it does and 0 otherwise. If all middle-layer constraints are satisfied the top-layer output is 1, indicating that the point is inside the obstacle; if at least one constraint is not satisfied the top-layer output is 0, indicating that the point is outside the obstacle. The algorithm flow comprises the following steps:
S2.1, inputting the starting point P(x_1, y_1) and the target point P(x_N, y_N); for t = 0 the initial path is taken as a row of points uniformly distributed on the straight line from the starting point to the target, i.e. when x_1 ≠ x_N, x_i = x_1 + i·(x_N − x_1)/(N − 1) and y_i = (y_N − y_1)·(x_i − x_1)/(x_N − x_1) + y_1;
S2.2, for each path point P(x_i, y_i), i = 2, …, N − 1, detecting with the detector whether it lies within an obstacle;
S2.3, if the path point P(x_i, y_i) lies within the obstacle, adjusting it by the obstacle-avoidance algorithm; if the path point P(x_i, y_i) is not within the obstacle, adjusting it by the moving algorithm.
As a further improvement of the present technical solution, the formula of the obstacle avoidance algorithm is as follows:
(The two obstacle-avoidance update formulas, one for the x-coordinate and one for the y-coordinate of the path point, are given only as images in the original publication and are not reproduced here.)
As a further improvement of the present technical solution, the formula of the moving algorithm is as follows:
Δx_i = −η_2·(2x_i − x_{i−1} − x_{i+1});
Δy_i = −η_2·(2y_i − y_{i−1} − y_{i+1}).
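A minimal sketch of the collision detector and of the moving-algorithm update follows. The half-plane description of an obstacle, the parameter values and the omission of the obstacle-avoidance branch (whose update formulas appear only as images in the publication) are all assumptions made for illustration.

```python
import numpy as np

def inside_obstacle(point, halfplanes):
    """Collision detector of S2.2: each middle-layer node applies a step function to one
    linear constraint a*x + b*y + c <= 0 describing the obstacle; the top layer outputs 1
    only when every constraint is satisfied, i.e. the point is inside the obstacle."""
    x, y = point
    middle = [1 if a * x + b * y + c <= 0 else 0 for a, b, c in halfplanes]
    return int(all(middle))

def initial_path(p_start, p_goal, n):
    """S2.1: n points uniformly distributed on the straight line from start to target."""
    return np.linspace(np.asarray(p_start, float), np.asarray(p_goal, float), n)

def smooth_free_points(path, obstacles, eta2=0.2, n_iter=100):
    """S2.3, collision-free branch only: apply the moving algorithm
    dx_i = -eta2*(2*x_i - x_{i-1} - x_{i+1}) to interior points outside every obstacle."""
    path = path.copy()
    for _ in range(n_iter):
        for i in range(1, len(path) - 1):
            if not any(inside_obstacle(path[i], obs) for obs in obstacles):
                path[i] += -eta2 * (2 * path[i] - path[i - 1] - path[i + 1])
    return path

# A unit-square obstacle as four half-plane constraints: 0 <= x <= 1, 0 <= y <= 1.
square = [(-1, 0, 0), (1, 0, -1), (0, -1, 0), (0, 1, -1)]
path = initial_path((-1.0, -1.0), (2.0, 2.0), 9)
print(inside_obstacle((0.5, 0.5), square))        # 1 -> the point is inside the obstacle
print(smooth_free_points(path, [square])[:3])     # interior points relaxed by the update
```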
As a further improvement of the technical scheme, the execution unit is designed around AGV scheduling: a model is established, an optimization objective function for the transportation system is built from the model, constraint conditions are derived from the model, and transportation tasks and operation paths are allocated reasonably, ensuring one-to-one correspondence between transportation tasks and task-executing vehicles, repetition-free and conflict-free system scheduling, and optimal matching of different AGV operation constraints with path space resources, thereby improving the overall efficiency of the system and reducing production cost. The scheduling management of the automatic stock-taking system works as follows: a stock-taking plan is first made in the material management system, stock-taking instructions are sent to standby robots according to the plan, each robot automatically drives to the bins, identifies the actual inventory by scanning the bin two-dimensional codes, checks it against the recorded warehouse inventory and records the result; after the check is finished a stock-taking report is generated automatically and submitted to the platform.
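As an illustration of the one-to-one assignment of transportation tasks to vehicles, the sketch below minimizes total travel distance with the Hungarian algorithm (scipy.optimize.linear_sum_assignment). The distance-based cost and the coordinates are assumptions; the patent does not spell out its objective function or constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks(task_positions, forklift_positions):
    """One-to-one assignment of transportation tasks to forklifts minimizing total distance."""
    tasks = np.asarray(task_positions, float)
    trucks = np.asarray(forklift_positions, float)
    # cost[i, j] = Euclidean distance from forklift i to the pick-up point of task j
    cost = np.linalg.norm(trucks[:, None, :] - tasks[None, :, :], axis=2)
    truck_idx, task_idx = linear_sum_assignment(cost)
    return list(zip(truck_idx.tolist(), task_idx.tolist())), cost[truck_idx, task_idx].sum()

pairs, total = assign_tasks(task_positions=[(10, 2), (4, 8), (0, 0)],
                            forklift_positions=[(1, 1), (9, 3), (5, 7)])
print(pairs, total)   # each forklift receives exactly one task, with no conflicts
```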
At present, Petri net models are widely used in research on the AGV scheduling problem and can accurately describe the dynamic characteristics of a multi-AGV system. However, Petri net modeling is complex, the solution process is involved, and real-time performance is poor. Tanaka Y. et al. established a timed Petri net model and decomposed it into several independent sub-nets, solving AGV scheduling optimization with collision-free paths. Nishi T. et al. likewise established a Petri net model, decomposed the complex net into simple, independent sub-nets by a suitable decomposition method, and on that basis obtained a path-optimizing firing sequence for each AGV by combining it with a Lagrangian relaxation method. In China, Liu et al. used a space-resource-oriented colored Petri net for modeling and analysis, achieving conflict-free and collision-free AGV operation. Undirected Petri net models have also been established and the problems of path optimization and selection studied, yielding certain theoretical results.
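For readers unfamiliar with the mechanics referred to above, the toy sketch below shows how a place/transition net makes shared path segments conflict-free: a segment is a place holding a single token, so only one AGV's "enter" transition can fire. This is an illustrative model only, not the decomposition or Lagrangian-relaxation methods of the cited works.

```python
class PetriNet:
    """Minimal place/transition net with integer markings."""
    def __init__(self, marking):
        self.marking = dict(marking)           # place -> number of tokens
        self.transitions = {}                  # name  -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1               # consume one token per input place
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Two AGVs compete for path segment "seg1"; the single token in "seg1_free"
# guarantees that only one of the two "enter" transitions can fire.
net = PetriNet({"agv1_ready": 1, "agv2_ready": 1, "seg1_free": 1})
net.add_transition("agv1_enter", ["agv1_ready", "seg1_free"], ["agv1_on_seg1"])
net.add_transition("agv2_enter", ["agv2_ready", "seg1_free"], ["agv2_on_seg1"])
print(net.fire("agv1_enter"), net.fire("agv2_enter"))   # True False -> conflict avoided
```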
Compared with the prior art, the invention has the following beneficial effects: in this multi-intelligent-forklift combined operation control system, a model is established, an optimization objective function for the transportation system is built from the model, constraint conditions are derived from the model, and transportation tasks and operation paths are allocated reasonably, ensuring one-to-one correspondence between transportation tasks and task-executing vehicles and repetition-free, conflict-free system scheduling, so that different operation constraints are optimally matched with path space resources, thereby improving the overall efficiency of the system and reducing production cost.
Drawings
FIG. 1 is a block diagram of the entire unit of a combined operation control system of embodiment 1;
FIG. 2 is a block diagram of a planning unit flow in embodiment 1;
fig. 3 is a block diagram of a decision unit flow in embodiment 1.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1 to 3, the present embodiment provides a multi-intelligent-forklift combined operation control system, which includes a sensing unit, a planning unit, a decision unit and an execution unit, wherein the sensing unit is used for sensing an objective world through various sensors carried by a robot; the planning unit is used for planning and predicting a route to be traveled; the decision unit is used for selecting and judging the route through a machine learning algorithm; the execution unit is used for controlling various aspects of equipment and motors of the robot through instructions.
In this embodiment, the sensing unit includes a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; and the map reconstruction module is used for establishing map information according to the information fed back by various sensors.
The sensor module comprises a visual camera system, a three-dimensional laser sensor, a four-line laser sensor and a GPS positioning sensor.
The visual camera system comprises a light source, a lens, a camera, an image acquisition card and a vision processor. Illumination is an important factor affecting the input of a machine vision system and directly influences the quality of the input data and the application effect. Since there is no universal machine vision lighting device, a suitable illuminator should be chosen for each specific application to achieve the best effect. Light sources can be divided into visible and invisible light; commonly used visible sources are incandescent lamps, fluorescent lamps, mercury lamps and sodium lamps. Ambient light may also affect image quality, so a protective screen can be added to reduce its influence. According to the irradiation method, the lighting scheme can be divided into back lighting, forward lighting, structured-light lighting and stroboscopic lighting: in back lighting the measured object is placed between the light source and the camera, which yields images with high contrast; in forward lighting the light source and the camera are on the same side of the measured object, which is easy to install; in structured-light lighting a grating or a line light source is projected onto the measured object and its three-dimensional information is demodulated from the distortion of the projected pattern; and in stroboscopic lighting high-frequency light pulses are irradiated onto the object, with image acquisition synchronized to the pulses.
The CCD is a typical solid-state imaging device; its outstanding characteristic is that it uses charge as the signal, unlike other devices that use current or voltage: the imaging device forms charge packets through photoelectric conversion and then, under the action of drive pulses, transfers and amplifies them to output the image signal.
The image acquisition card can be a PCI- or AGP-compatible capture card that quickly transfers images to computer memory for processing.
The three-dimensional laser sensor adopts a three-dimensional laser scanner; the main manufacturers include Leica of Switzerland, FARO and 3D Digital of the United States, Riegl of Austria, Optech of Canada, Mensi of France and, in China, a Beijing creative-industry science and technology development company. A three-dimensional laser scanner can directly acquire three-dimensional data of all kinds of entities or scenes, yielding a sampled point cloud of the surface of the measured object, and is fast, simple and accurate. From the point cloud model and the range image data, a three-dimensional model of the target can be reconstructed rapidly, and lines, surfaces, volumes and other data of the three-dimensional space can be obtained for applications such as surveying and mapping, metrology, analysis, simulation, display, monitoring and virtual reality.
The four-line laser sensor adopts a four-line laser radar. Laser and microwave are both electromagnetic waves, so the derivation of the lidar range equation is analogous to that of the microwave radar; the lidar equation can be derived from the microwave radar range equation:
P_R = (P_T · G_T · σ · D^2 · η_Atm^2 · η_Sys) / (64π · R^4);
where P_R is the received laser power, P_T is the transmitted laser power, G_T is the transmit antenna gain, σ is the target scattering cross-section, D is the receiving aperture, R is the range from the lidar to the target, η_Atm is the one-way atmospheric transmission coefficient, and η_Sys is the transmission coefficient of the lidar optical system.
Further, the map reconstruction module is based on an image feature matching algorithm. The purpose of image feature matching is to find the transformation between two frames; because the extracted features must be invariant to environmental changes in a moving environment, interest-point detection operators are commonly used. In this embodiment a SIFT feature point algorithm is adopted, with the following formula:
D(x, y, k^(n-1)·σ) = (G(x, y, k^n·σ) − G(x, y, k^(n-1)·σ)) * I(x, y);
where G(x, y, k^n·σ) is a Gaussian weighting function with parameter k^n·σ; writing k^n·σ as σ_0, it is given by:
G(x, y, σ_0) = (1 / (2π·σ_0^2)) · exp(−(x^2 + y^2) / (2σ_0^2));
Specifically, the planning unit uses an ant colony algorithm to find the path. The ant colony algorithm is a bio-inspired algorithm that simulates the foraging behaviour by which an ant colony finds the shortest path. When a combinatorial optimization problem is solved with an ant colony algorithm, the problem is first expressed in a standard pheromone-based form; each ant then independently constructs a solution from local pheromone information and updates the surrounding pheromone according to the quality of its solution. Repeating this process yields an optimized solution of the combinatorial optimization problem. The method comprises the following steps:
S1.1, establishing the map model: inputting a map of the work area represented by a matrix of 0s and 1s, in which 0 denotes a passable cell and 1 denotes an obstacle;
S1.2, setting the parameters: inputting the initial pheromone matrix and selecting the positions of the start point and the end point;
S1.3, selecting the nodes reachable from the current point in the next step, computing the probability of moving to each node from its pheromone, and selecting the next starting point with the roulette-wheel algorithm;
S1.4, updating the path and the path length;
S1.5, repeating S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, updating the pheromone matrix.
As a further improvement of the technical scheme, the formula of the roulette algorithm is as follows:
p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is]^β, for j ∈ allowed_k, and 0 otherwise;
where τ_ij(t) is the pheromone concentration on arc (i, j) of the analysis graph, η_ij is the heuristic information associated with arc (i, j), and α and β are the weight parameters of τ_ij(t) and η_ij, respectively.
Still further, the algorithm formula for updating the pheromone matrix is as follows:
τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Δτ_ij(t);
Δτ_ij(t) = Σ_k Δτ_ij^k(t), where Δτ_ij^k(t) = Q / L_k(t) if ant k traversed arc (i, j) in iteration t, and 0 otherwise;
where ρ is the pheromone volatilization coefficient, Q is the pheromone increase intensity, and L_k(t) is the length of the path constructed by ant k.
It is worth noting that the decision unit is based on a shortest-path planning algorithm. During path planning the position (x, y) of each path point is repeatedly checked to determine whether the point lies inside an obstacle, i.e. whether a collision occurs, and the detection result is returned to the system. In this algorithm a neural network implements the penalty function from a path point to an obstacle: the activation functions of the middle-layer and top-layer nodes are step functions, and each middle-layer node determines whether the point satisfies its constraint, outputting 1 if it does and 0 otherwise. If all middle-layer constraints are satisfied the top-layer output is 1, indicating that the point is inside the obstacle; if at least one constraint is not satisfied the top-layer output is 0, indicating that the point is outside the obstacle. The algorithm flow comprises the following steps:
S2.1, inputting the starting point P(x_1, y_1) and the target point P(x_N, y_N); for t = 0 the initial path is taken as a row of points uniformly distributed on the straight line from the starting point to the target, i.e. when x_1 ≠ x_N, x_i = x_1 + i·(x_N − x_1)/(N − 1) and y_i = (y_N − y_1)·(x_i − x_1)/(x_N − x_1) + y_1;
S2.2, for each path point P(x_i, y_i), i = 2, …, N − 1, detecting with the detector whether it lies within an obstacle;
S2.3, if the path point P(x_i, y_i) lies within the obstacle, adjusting it by the obstacle-avoidance algorithm; if the path point P(x_i, y_i) is not within the obstacle, adjusting it by the moving algorithm.
Furthermore, the formula of the obstacle avoidance algorithm is as follows:
(The two obstacle-avoidance update formulas, one for the x-coordinate and one for the y-coordinate of the path point, are given only as images in the original publication and are not reproduced here.)
In addition, the formula of the moving algorithm is as follows:
Δx_i = −η_2·(2x_i − x_{i−1} − x_{i+1});
Δy_i = −η_2·(2y_i − y_{i−1} − y_{i+1}).
It is worth explaining that the execution unit is designed around AGV scheduling: a model is established, an optimization objective function for the transportation system is built from the model, constraint conditions are derived from the model, and transportation tasks and operation paths are allocated reasonably, ensuring one-to-one correspondence between transportation tasks and task-executing vehicles, repetition-free and conflict-free system scheduling, and optimal matching of different AGV operation constraints with path space resources, thereby improving the overall efficiency of the system and reducing production cost. The scheduling management of the automatic stock-taking system works as follows: a stock-taking plan is first made in the material management system, stock-taking instructions are sent to standby robots according to the plan, each robot automatically drives to the bins, identifies the actual inventory by scanning the bin two-dimensional codes, checks it against the recorded warehouse inventory and records the result; after the check is finished a stock-taking report is generated automatically and submitted to the platform.
At present, Petri net models are widely used in research on the AGV scheduling problem and can accurately describe the dynamic characteristics of a multi-AGV system. However, Petri net modeling is complex, the solution process is involved, and real-time performance is poor. Tanaka Y. et al. established a timed Petri net model and decomposed it into several independent sub-nets, solving AGV scheduling optimization with collision-free paths. Nishi T. et al. likewise established a Petri net model, decomposed the complex net into simple, independent sub-nets by a suitable decomposition method, and on that basis obtained a path-optimizing firing sequence for each AGV by combining it with a Lagrangian relaxation method. In China, Liu et al. used a space-resource-oriented colored Petri net for modeling and analysis, achieving conflict-free and collision-free AGV operation. Undirected Petri net models have also been established and the problems of path optimization and selection studied, yielding certain theoretical results.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A multi-intelligent-forklift combined operation control system, characterized in that: it comprises a sensing unit, a planning unit, a decision unit and an execution unit, wherein the sensing unit is used for sensing the objective world through various sensors carried by the robot; the planning unit is used for planning and predicting the route to be travelled; the decision unit is used for selecting and judging routes through a machine learning algorithm; and the execution unit is used for controlling the equipment and motors of the robot through instructions.
2. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the sensing unit comprises a sensor module, a real-time positioning module and a map reconstruction module; the sensor module is used for sensing the environment through various sensors; the real-time positioning module is used for determining position information in real time; the map reconstruction module is used for establishing map information according to information fed back by various sensors.
3. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the map reconstruction module is based on an image feature matching algorithm, and the algorithm formula is as follows:
D(x, y, k^(n-1)·σ) = (G(x, y, k^n·σ) − G(x, y, k^(n-1)·σ)) * I(x, y);
where G(x, y, k^n·σ) is a Gaussian weighting function with parameter k^n·σ; writing k^n·σ as σ_0, it is given by:
G(x, y, σ_0) = (1 / (2π·σ_0^2)) · exp(−(x^2 + y^2) / (2σ_0^2));
4. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the planning unit adopts an ant colony algorithm to solve the path, and the method comprises the following processes:
S1.1, establishing the map model: inputting a map of the work area represented by a matrix of 0s and 1s, in which 0 denotes a passable cell and 1 denotes an obstacle;
S1.2, setting the parameters: inputting the initial pheromone matrix and selecting the positions of the start point and the end point;
S1.3, selecting the nodes reachable from the current point in the next step, computing the probability of moving to each node from its pheromone, and selecting the next starting point with the roulette-wheel algorithm;
S1.4, updating the path and the path length;
S1.5, repeating S1.3 and S1.4 until the end point is reached or no path is available;
S1.6, updating the pheromone matrix.
5. The combined operation control system for multiple intelligent forklifts according to claim 4, wherein: the formula of the roulette algorithm is as follows:
p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is]^β, for j ∈ allowed_k, and 0 otherwise;
where τ_ij(t) is the pheromone concentration on arc (i, j) of the analysis graph, η_ij is the heuristic information associated with arc (i, j), and α and β are the weight parameters of τ_ij(t) and η_ij, respectively.
6. The combined operation control system for multiple intelligent forklifts according to claim 4, wherein: the algorithm formula for updating the pheromone matrix is as follows:
τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Δτ_ij(t);
Δτ_ij(t) = Σ_k Δτ_ij^k(t), where Δτ_ij^k(t) = Q / L_k(t) if ant k traversed arc (i, j) in iteration t, and 0 otherwise;
where ρ is the pheromone volatilization coefficient, Q is the pheromone increase intensity, and L_k(t) is the length of the path constructed by ant k.
7. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the decision unit is based on a shortest path planning algorithm, and the algorithm flow comprises the following steps:
S2.1, inputting the starting point P(x_1, y_1) and the target point P(x_N, y_N); for t = 0 the initial path is taken as a row of points uniformly distributed on the straight line from the starting point to the target, i.e. when x_1 ≠ x_N, x_i = x_1 + i·(x_N − x_1)/(N − 1) and y_i = (y_N − y_1)·(x_i − x_1)/(x_N − x_1) + y_1;
S2.2, for each path point P(x_i, y_i), i = 2, …, N − 1, detecting with the detector whether it lies within an obstacle;
S2.3, if the path point P(x_i, y_i) lies within the obstacle, adjusting it by the obstacle-avoidance algorithm; if the path point P(x_i, y_i) is not within the obstacle, adjusting it by the moving algorithm.
8. The combined operation control system for multiple intelligent forklifts according to claim 7, wherein: the formula of the obstacle avoidance algorithm is as follows:
(The two obstacle-avoidance update formulas, one for the x-coordinate and one for the y-coordinate of the path point, are given only as images in the original publication and are not reproduced here.)
9. The combined operation control system for multiple intelligent forklifts according to claim 7, wherein: the formula of the moving algorithm is as follows:
Δx_i = −η_2·(2x_i − x_{i−1} − x_{i+1});
Δy_i = −η_2·(2y_i − y_{i−1} − y_{i+1}).
10. The combined operation control system for multiple intelligent forklifts according to claim 1, wherein: the execution unit is designed based on AGV scheduling.
CN202010603736.3A 2020-06-29 2020-06-29 Multi-intelligent forklift combined operation control system Pending CN111766881A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010603736.3A CN111766881A (en) 2020-06-29 2020-06-29 Multi-intelligent forklift combined operation control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010603736.3A CN111766881A (en) 2020-06-29 2020-06-29 Multi-intelligent forklift combined operation control system

Publications (1)

Publication Number Publication Date
CN111766881A true CN111766881A (en) 2020-10-13

Family

ID=72722876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010603736.3A Pending CN111766881A (en) 2020-06-29 2020-06-29 Multi-intelligent forklift combined operation control system

Country Status (1)

Country Link
CN (1) CN111766881A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506075A (en) * 2020-11-20 2021-03-16 重庆交通大学 TPZN-based intelligent network automobile system cooperative control method
CN113048973A (en) * 2021-03-12 2021-06-29 深圳优艾智合机器人科技有限公司 Path planning method, path planning device, management system and storage medium
TWI799051B (en) * 2022-01-03 2023-04-11 財團法人工業技術研究院 Automatic guided vehicle and method for forking pallet

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441736A (en) * 2007-11-21 2009-05-27 新乡市起重机厂有限公司 Path planning method of motor crane robot
KR101585374B1 (en) * 2015-04-29 2016-01-15 중앙대학교 산학협력단 Apparatus and Method for planning robot path using genetic algorithm
CN106452903A (en) * 2016-10-31 2017-02-22 华南理工大学 Cloud-aided intelligent warehouse management robot system and method
CN109828578A (en) * 2019-02-22 2019-05-31 南京天创电子技术有限公司 A kind of instrument crusing robot optimal route planing method based on YOLOv3
CN110424075A (en) * 2019-09-04 2019-11-08 中国科学院重庆绿色智能技术研究院 A kind of textile machinery people host computer intelligently doffs control system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441736A (en) * 2007-11-21 2009-05-27 新乡市起重机厂有限公司 Path planning method of motor crane robot
KR101585374B1 (en) * 2015-04-29 2016-01-15 중앙대학교 산학협력단 Apparatus and Method for planning robot path using genetic algorithm
CN106452903A (en) * 2016-10-31 2017-02-22 华南理工大学 Cloud-aided intelligent warehouse management robot system and method
CN109828578A (en) * 2019-02-22 2019-05-31 南京天创电子技术有限公司 A kind of instrument crusing robot optimal route planing method based on YOLOv3
CN110424075A (en) * 2019-09-04 2019-11-08 中国科学院重庆绿色智能技术研究院 A kind of textile machinery people host computer intelligently doffs control system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭斯羽 (Guo Siyu): 《面向检测的图像处理技术》 (Detection-Oriented Image Processing Technology), 31 August 2015, Hunan University Press (湖南大学出版社) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506075A (en) * 2020-11-20 2021-03-16 重庆交通大学 TPZN-based intelligent network automobile system cooperative control method
CN112506075B (en) * 2020-11-20 2022-11-22 重庆交通大学 TPZN-based intelligent network automobile system cooperative control method
CN113048973A (en) * 2021-03-12 2021-06-29 深圳优艾智合机器人科技有限公司 Path planning method, path planning device, management system and storage medium
CN113048973B (en) * 2021-03-12 2024-04-19 深圳优艾智合机器人科技有限公司 Path planning method, path planning device, management system and storage medium
TWI799051B (en) * 2022-01-03 2023-04-11 財團法人工業技術研究院 Automatic guided vehicle and method for forking pallet

Similar Documents

Publication Publication Date Title
CN111766881A (en) Multi-intelligent forklift combined operation control system
Caltagirone et al. Fast LIDAR-based road detection using fully convolutional neural networks
CN110008843B (en) Vehicle target joint cognition method and system based on point cloud and image data
Cunneen et al. Autonomous vehicles and avoiding the trolley (dilemma): vehicle perception, classification, and the challenges of framing decision ethics
US20240027621A1 (en) Method and device for adjusting parameters of lidar, and lidar
US11734935B2 (en) Transferring synthetic lidar system data to real world domain for autonomous vehicle training applications
Shen et al. Parallel sensing in metaverses: Virtual-real interactive smart systems for “6S” sensing
Zhao et al. Autonomous live working robot navigation with real‐time detection and motion planning system on distribution line
CN114359562B (en) Automatic semantic segmentation and labeling system and method for four-dimensional point cloud
Orengo et al. New developments in drone‐based automated surface survey: towards a functional and effective survey system
Mourtzis et al. Uavs for industrial applications: Identifying challenges and opportunities from the implementation point of view
CN113674355A (en) Target identification and positioning method based on camera and laser radar
Poornima et al. Fog robotics-based intelligence transportation system using line-of-sight intelligent transportation
CN109291657A (en) Laser Jet system is identified based on convolutional neural networks space structure part industry Internet of Things
Xiao et al. A path planning algorithm for PCB surface quality automatic inspection
Liu et al. Visual driving assistance system based on few-shot learning
CN115830578B (en) Article inspection method and device and electronic equipment
Politi et al. Path planning and landing for unmanned aerial vehicles using ai
Chavan et al. Obstacle detection and avoidance for automated vehicle: A review
CN209198677U (en) A kind of laser scanning atmospheric environment Grid Monitoring System
Caros et al. Object segmentation of cluttered airborne lidar point clouds
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112231430A (en) Map data management method and device
Naso et al. Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data
CN116047537B (en) Road information generation method and system based on laser radar

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20201013