CN120085649A - Intelligent unmanned forklift system and method based on visual navigation - Google Patents
- Publication number
- CN120085649A (application CN202510212936.9A)
- Authority
- CN
- China
- Prior art keywords
- module
- pedestrian
- forklift
- intelligent unmanned
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/40—Control within particular dimensions
- G05D1/43—Control of position or course in two dimensions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/24—Arrangements for determining position or orientation
- G05D1/242—Means based on the reflection of waves generated by the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/24—Arrangements for determining position or orientation
- G05D1/243—Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/24—Arrangements for determining position or orientation
- G05D1/247—Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/60—Intended control result
- G05D1/617—Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
- G05D1/622—Obstacle avoidance
- G05D1/633—Dynamic obstacles
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/60—Intended control result
- G05D1/644—Optimisation of travel parameters, e.g. of energy consumption, journey time or distance
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/60—Intended control result
- G05D1/648—Performing a task within a working area or space, e.g. cleaning
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Forklifts And Lifting Vehicles (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention relates to the technical field of intelligent unmanned forklifts, and in particular to an intelligent unmanned forklift system and method based on visual navigation. The system comprises a forklift main body module, a binocular vision module, an image preprocessing unit, a binocular ranging module, a track prediction module, a pedestrian area detection module, a pedestrian area center calculation module and an obstacle avoidance module, wherein the image preprocessing unit is connected with the binocular vision module, the binocular ranging module is connected with the image preprocessing unit, the track prediction module is connected with the binocular ranging module, and the pedestrian area detection module is connected with the track prediction module. The invention thereby solves the technical problem that, in the prior art, when an intelligent unmanned forklift system navigates with a single sensor, its performance is limited in complex environments, so that positioning accuracy is reduced and safety accidents may even be caused.
Description
Technical Field
The invention relates to the technical field of intelligent unmanned forklifts, in particular to an intelligent unmanned forklift system and method based on visual navigation.
Background
With the vigorous development of the logistics and warehousing industry, automation and intelligence have become important forces driving its transformation and upgrading. As a key link in automated logistics solutions, the intelligent unmanned forklift is increasingly widely used and brings unprecedented efficiency gains to warehouse operations. An unmanned forklift can autonomously complete tasks such as carrying and stacking cargo without human operators, greatly reducing the labor burden and improving operational efficiency. During its use, the navigation system of the intelligent unmanned forklift is vital; this navigation system mainly relies on sensors such as laser radar and ultrasonic sensors.
However, when a single sensor such as a laser radar or an ultrasonic sensor is used for navigation, performance is limited in complex environments, such as those with changing light or occlusions, so that positioning accuracy is reduced and safety accidents may even be caused.
Disclosure of Invention
The invention aims to provide an intelligent unmanned forklift system and method based on visual navigation, so as to solve the technical problem that, in the prior art, when an intelligent unmanned forklift system navigates with a single sensor, its performance is limited in complex environments, such as those with changing light or occlusions, so that positioning accuracy is reduced and safety accidents may even be caused.
In order to achieve the above purpose, the intelligent unmanned forklift system based on visual navigation comprises a forklift main body module, a binocular vision module, an image preprocessing unit, a binocular ranging module, a track prediction module, a pedestrian area detection module, a pedestrian area center calculation module and an obstacle avoidance module, wherein the binocular vision module is mounted on the forklift main body module, the image preprocessing unit is connected with the binocular vision module, the binocular ranging module is connected with the image preprocessing unit, the track prediction module is connected with the binocular ranging module, the pedestrian area detection module is connected with the track prediction module, the pedestrian area center calculation module is connected with the pedestrian area detection module, and the obstacle avoidance module is connected with the pedestrian area center calculation module;
The forklift main body module serves as the carrier and executes physical tasks such as carrying and stacking;
the binocular vision module is used for providing stereoscopic vision perception and providing basic data for subsequent image processing and ranging;
the image preprocessing unit receives the image information captured by the binocular vision module and performs preprocessing to improve the image quality;
the binocular distance measuring module calculates the accurate distance of objects in the environment by utilizing the binocular vision principle and combining the preprocessed image information;
the track prediction module predicts the motion track of an object in the environment based on the distance information calculated by the binocular distance measurement module, and provides data support for obstacle avoidance decision of the forklift main body module;
the pedestrian region detection module is used for detecting a pedestrian region in the image, marking the pedestrian region as a rectangular frame, and simultaneously transmitting the rectangular frame to the pedestrian region center calculation module to calculate the center point position of the pedestrian region;
The obstacle avoidance module receives the position information of the center point of the pedestrian, judges whether the forklift needs to avoid the pedestrian or not, and formulates a corresponding obstacle avoidance strategy.
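The binocular ranging described above rests on stereo triangulation: for a rectified camera pair, depth Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. The patent discloses no formulas or calibration values, so the sketch below and all of its numbers are illustrative assumptions only:

```python
# Hedged sketch of binocular triangulation; the focal length, baseline and
# disparity values below are assumptions, not figures from the patent.

def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from its disparity on a rectified pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 0.12 m baseline and 42 px disparity
print(round(stereo_depth(42.0, 700.0, 0.12), 2))  # 2.0
```

In practice the disparity would come from a stereo matcher run on the preprocessed image pair, not be supplied by hand.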
The image preprocessing unit comprises a data collection module, a denoising module, a contrast enhancement module, an edge detection module and an image output module, wherein the data collection module is connected with the binocular vision module, the denoising module is connected with the data collection module and the contrast enhancement module, the edge detection module is connected with the contrast enhancement module, and the image output module is connected with the edge detection module.
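The chain of data collection, denoising, contrast enhancement, edge detection and output can be sketched as follows. The patent does not name the underlying algorithms, so a mean filter, a linear contrast stretch and gradient-magnitude edges are assumed stand-ins for the respective modules:

```python
# Assumed stand-in algorithms for the preprocessing chain; the patent only
# names the stages, not the methods.
import numpy as np

def denoise(img):
    """3x3 mean filter (stand-in for the denoising module)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def enhance_contrast(img):
    """Linear stretch to the full 0-255 range (contrast enhancement module)."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / max(hi - lo, 1e-6)

def detect_edges(img):
    """Gradient-magnitude edges (stand-in for the edge detection module)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# Stand-in for a captured grayscale frame from the binocular vision module.
frame = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(float)
edges = detect_edges(enhance_contrast(denoise(frame)))
print(edges.shape)  # (480, 640)
```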
The intelligent unmanned forklift system based on visual navigation further comprises an environment sensing module, wherein the environment sensing module is also arranged on the forklift main body module;
The environment sensing module adopts an environment sensing sensor, such as a laser radar or an ultrasonic sensor, and works cooperatively with the binocular vision module to further improve the sensing capability of complex environments.
The intelligent unmanned forklift system based on visual navigation further comprises a path planning module, and the path planning module is connected with the obstacle avoidance module.
The pedestrian area center calculation module is based on a YOLOv8 pedestrian detection algorithm improved with GhostNet, and performs pedestrian area center calculation by combining target detection and segmentation.
GhostNet is a lightweight neural network structure that reduces the computational complexity of a model by introducing lightweight convolution operations while maintaining detection accuracy. GhostNet is integrated into the YOLOv8 model, and the resulting model performs pedestrian detection and segmentation.
The detected pedestrian area is then accurately segmented to calculate the pedestrian's center position, which improves the accuracy and real-time performance of dynamic obstacle avoidance, allows the system to better understand the position and behavior of pedestrians in the environment, and supports more reasonable obstacle avoidance decisions.
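The two center estimates just described can be made concrete: a coarse center as the midpoint of the detection rectangle, and a refined centroid over the segmented pedestrian pixels. The (x1, y1, x2, y2) box format and the pixel-list mask representation below are assumptions, not taken from the patent:

```python
# Assumed representations: box as (x1, y1, x2, y2), mask as (row, col) pixels.

def box_center(box):
    """Coarse pedestrian center: midpoint of the detection rectangle."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def mask_centroid(pixels):
    """Refined center: centroid (x, y) of the segmented pedestrian pixels."""
    n = len(pixels)
    return (sum(c for _, c in pixels) / n, sum(r for r, _ in pixels) / n)

print(box_center((100, 40, 180, 240)))  # (140.0, 140.0)
```

The mask centroid is less sensitive than the box midpoint to limbs or occlusions that stretch the rectangle, which is why segmentation refines the estimate.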
The intelligent unmanned forklift system based on visual navigation further comprises an interaction module and an operation end, wherein the interaction module is connected with the obstacle avoidance module, and the operation end is connected with the interaction module.
The intelligent unmanned forklift system based on visual navigation further comprises an adaptive optimization module, and the adaptive optimization module is connected with the path planning module.
The adaptive optimization module adopts an improved adaptive genetic algorithm (AGA) dedicated to path planning and scheduling of the unmanned forklift in complex environments. Through several technical innovations, it overcomes the limitations of the traditional genetic algorithm in practical applications, so that the algorithm can run efficiently in dynamic environments with multiple objectives and multiple constraints.
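The patent claims an improved AGA but does not publish its operators, so the following is only a generic sketch of the idea: a genetic search over waypoint paths whose mutation rate adapts to population convergence (a common AGA device, not necessarily the patented one). All parameters and the workspace bounds are assumptions:

```python
# Generic adaptive-GA sketch for waypoint path planning. Everything here
# (operators, bounds, thresholds) is assumed; the patented AGA is not disclosed.
import random

def path_length(path):
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def aga_optimize(start, goal, n_waypoints=3, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    def random_path():
        return [start] + [(rng.uniform(0, 10), rng.uniform(0, 10))
                          for _ in range(n_waypoints)] + [goal]
    population = [random_path() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=path_length)
        # Adaptive mutation: mutate more when the population has converged.
        spread = path_length(population[-1]) - path_length(population[0])
        mut = 0.5 if spread < 1.0 else 0.1
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_waypoints + 1)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                      # mutate one waypoint
                i = rng.randrange(1, n_waypoints + 1)
                child[i] = (rng.uniform(0, 10), rng.uniform(0, 10))
            children.append(child)
        population = parents + children
    return min(population, key=path_length)

best = aga_optimize((0, 0), (10, 10))
print(round(path_length(best), 2))  # some value >= 14.14, the straight-line distance
```

A real deployment would replace the pure length objective with a multi-objective fitness (obstacle clearance, energy, time) to match the multi-constraint setting the text describes.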
The intelligent unmanned forklift system based on visual navigation further comprises a safety monitoring module, wherein the safety monitoring module is also installed on the forklift main body module;
The safety monitoring module monitors the running state and surrounding environment of the forklift in real time; once an abnormal condition is detected, it immediately raises an alarm and takes corresponding safety measures.
The intelligent unmanned forklift system based on visual navigation further comprises a cooperative operation module, and the cooperative operation module is connected with the forklift main body module.
The invention also provides a method of using an intelligent unmanned forklift based on visual navigation, applied to the above intelligent unmanned forklift system based on visual navigation.
The method comprises the following steps:
utilizing the binocular vision module arranged on the forklift main body module to perform stereoscopic vision perception on the surrounding environment and capturing image information;
The image preprocessing unit receives the image information captured by the binocular vision module, and performs preprocessing to improve the image quality;
the binocular distance measuring module calculates the accurate distance of objects in the environment by utilizing the binocular vision principle and combining the preprocessed image information;
The track prediction module predicts the motion track of an object in the environment based on the distance information calculated by the binocular distance measurement module, meanwhile, the pedestrian area detection module detects the pedestrian area in the image, marks the pedestrian area as a rectangular frame, and simultaneously transmits the rectangular frame to the pedestrian area center calculation module to calculate the center point position of the pedestrian area;
The obstacle avoidance module receives the position information of the center point of the pedestrian, judges whether the forklift needs to avoid the pedestrian, formulates a corresponding obstacle avoidance strategy, and controls the forklift main body module to execute obstacle avoidance operation.
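The final step above, deciding from the pedestrian center whether and how to avoid, is not specified in detail by the patent. A minimal threshold-based decision rule, with every threshold an assumption, might look like:

```python
# Illustrative decision rule only; the actual obstacle avoidance strategy is
# not disclosed in the patent, and all thresholds below are assumptions.

def avoidance_action(pedestrian_distance_m, lateral_offset_m,
                     corridor_half_width=0.8, stop_dist=1.5, slow_dist=3.0):
    """Map the pedestrian center position to a forklift action."""
    if abs(lateral_offset_m) > corridor_half_width:
        return "continue"           # pedestrian outside the travel corridor
    if pedestrian_distance_m < stop_dist:
        return "stop"               # too close: halt immediately
    if pedestrian_distance_m < slow_dist:
        return "slow_and_replan"    # hand off to the path planning module
    return "continue"

print(avoidance_action(2.2, 0.3))  # slow_and_replan
```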
When the intelligent unmanned forklift system based on visual navigation is used, the binocular vision module mounted on the forklift main body module performs stereoscopic perception of the surrounding environment and captures image information; the image preprocessing unit receives and preprocesses this image information to improve image quality; the binocular ranging module calculates the accurate distance of objects in the environment using the binocular vision principle and the preprocessed images; the track prediction module predicts the motion track of objects in the environment based on the calculated distance information; meanwhile, the pedestrian area detection module detects the pedestrian area in the image and marks it as a rectangular frame, and the pedestrian area center calculation module calculates the position of the area's center point; the obstacle avoidance module receives this center point position, judges whether the forklift needs to avoid the pedestrian, formulates a corresponding obstacle avoidance strategy, and controls the forklift main body module to execute the obstacle avoidance operation. This solves the technical problem that, in the prior art, when an intelligent unmanned forklift system navigates with a single sensor, its performance is limited in complex environments, such as those with changing light or occlusions, so that positioning accuracy is reduced and safety accidents may even be caused.
By introducing the binocular vision module and combining it with the image preprocessing unit and the binocular ranging module, the invention achieves stereoscopic perception of the surrounding environment. Compared with a single sensor, the binocular vision system provides richer depth information and more accurate distance measurement, copes effectively with changing light and occlusions in complex environments, and significantly improves the navigation accuracy and stability of the forklift in complex scenes.
The pedestrian region detection module and the pedestrian region center calculation module are added, so that the system can detect and track the pedestrian position in real time, and accurately calculate the center position of the pedestrian region. This function is critical to ensuring pedestrian safety, especially in dense or dynamically changing traffic scenarios. By combining the obstacle avoidance module, the system can plan an obstacle avoidance path in advance, effectively avoid potential collision with pedestrians, and greatly improve safety.
The track prediction module predicts the motion tracks of pedestrians and obstacles, giving the forklift a longer planning horizon and a more accurate basis for decisions. This helps it make more reasonable path planning and speed control choices in complex environments, reduces sudden stops or detours caused by emergencies, and improves working efficiency.
Through the cooperation of multiple modules, the system adapts to environmental changes. Regardless of light intensity, occlusions, or the uncertainty of pedestrian behavior, the coordinated modules maintain high navigation accuracy and obstacle avoidance capability, enhancing the robustness and adaptability of the system.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing them are briefly introduced below. Obviously, these drawings show only some embodiments of the invention, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic block diagram of a first embodiment of the present invention.
Fig. 2 is a functional block diagram of a second embodiment of the present invention.
Reference numerals: 101 - forklift main body module; 102 - binocular vision module; 103 - image preprocessing unit; 104 - binocular ranging module; 105 - track prediction module; 106 - pedestrian area detection module; 107 - pedestrian area center calculation module; 108 - obstacle avoidance module; 109 - environment perception module; 110 - path planning module; 111 - data collection module; 112 - denoising module; 113 - contrast enhancement module; 114 - edge detection module; 115 - image output module; 201 - interaction module; 202 - operation end; 203 - adaptive optimization module; 204 - safety monitoring module; 205 - cooperative operation module; 206 - energy management module; 207 - login verification module.
Detailed Description
Embodiments of the invention are described in detail below, with examples illustrated in the accompanying drawings. The embodiments are intended to be illustrative and are not to be construed as limiting the invention.
The first embodiment of the application is as follows:
Referring to fig. 1, fig. 1 is a schematic block diagram of a first embodiment of the present invention.
The invention provides an intelligent unmanned forklift system based on visual navigation, comprising a forklift main body module 101, a binocular vision module 102, an image preprocessing unit 103, a binocular ranging module 104, a track prediction module 105, a pedestrian area detection module 106, a pedestrian area center calculation module 107, an obstacle avoidance module 108, an environment perception module 109 and a path planning module 110, wherein the image preprocessing unit 103 comprises a data collection module 111, a denoising module 112, a contrast enhancement module 113, an edge detection module 114 and an image output module 115.
For the present embodiment, the forklift body module 101 is used as a carrier for performing physical tasks such as handling and stacking;
the binocular vision module 102 is used for providing stereoscopic vision perception and providing basic data for subsequent image processing and ranging;
the image preprocessing unit 103 receives the image information captured by the binocular vision module 102, and performs preprocessing to improve the image quality;
the binocular distance measuring module 104 calculates the accurate distance of objects in the environment by utilizing the binocular vision principle and combining the preprocessed image information;
the track prediction module 105 predicts the motion track of the object in the environment based on the distance information calculated by the binocular distance measurement module 104, and provides data support for the obstacle avoidance decision of the forklift main body module 101;
The pedestrian region detection module 106 is configured to detect a pedestrian region in the image, mark the pedestrian region as a rectangular frame, and transmit the rectangular frame to the pedestrian region center calculation module 107 to calculate a center point position of the pedestrian region;
The obstacle avoidance module 108 receives the position information of the center point of the pedestrian, judges whether the forklift needs to avoid the pedestrian, and formulates a corresponding obstacle avoidance strategy.
In this embodiment, the binocular vision module 102 is mounted on the forklift main body module 101; the image preprocessing unit 103 is connected with the binocular vision module 102; the binocular ranging module 104 is connected with the image preprocessing unit 103; the track prediction module 105 is connected with the binocular ranging module 104; the pedestrian area detection module 106 is connected with the track prediction module 105; the pedestrian area center calculation module 107 is connected with the pedestrian area detection module 106; and the obstacle avoidance module 108 is connected with the pedestrian area center calculation module 107. The image preprocessing unit 103 receives and preprocesses the image information captured by the binocular vision module 102 to improve image quality; the binocular ranging module 104 calculates the accurate distance of objects in the environment using the binocular vision principle and the preprocessed images; the track prediction module 105 predicts the motion track of objects in the environment based on the calculated distance information; the pedestrian area detection module 106 detects the pedestrian area in the image, marks it as a rectangular frame, and transmits it to the pedestrian area center calculation module 107, which calculates the position of the area's center point; the obstacle avoidance module 108 receives this center point position, judges whether the forklift needs to avoid the pedestrian, formulates a corresponding obstacle avoidance strategy, and controls the forklift main body module 101 to execute the obstacle avoidance operation. This solves the technical problem that, in the prior art, when an intelligent unmanned forklift system navigates with a single sensor, its performance is limited in complex environments, such as those with changing light or occlusions, so that positioning accuracy is reduced and safety accidents may even be caused.
Secondly, the data collection module 111 is connected with the binocular vision module 102, the denoising module 112 is connected with the data collection module 111 and the contrast enhancement module 113, the edge detection module 114 is connected with the contrast enhancement module 113, and the image output module 115 is connected with the edge detection module 114;
The data collection module 111, denoising module 112, contrast enhancement module 113, edge detection module 114 and image output module 115 inside the image preprocessing unit 103 form a tightly coupled processing chain. These modules are integrated through efficient algorithms, enabling fast flow and accurate processing of image data: the denoising algorithm effectively filters image noise, the contrast enhancement algorithm markedly improves the visibility of image details, and the edge detection algorithm accurately captures key edge information in the image. This efficient integration improves not only the real-time performance of image processing but also the accuracy of its results.
The denoising module 112, the contrast enhancement module 113 and the edge detection module 114 in the present invention all adopt adaptive algorithm design. The denoising algorithm can carry out intelligent adjustment according to the characteristics of image noise, so that effective removal of noise is realized without losing image details. The contrast enhancement algorithm can adaptively adjust the images under different illumination conditions, so that the images can be ensured to show clear details in various environments. The edge detection algorithm also has self-adaptive capability, can accurately identify edge characteristics in an image, and can keep higher detection precision even in a complex or blurred image.
Meanwhile, the environment sensing module 109 is also mounted on the forklift body module 101;
the environment sensing module 109 employs an environment sensing sensor, such as a lidar or an ultrasonic sensor, to cooperate with the binocular vision module 102 to further enhance the sensing capability of complex environments.
In addition, the path planning module 110 is connected to the obstacle avoidance module 108, and the path planning module 110 is responsible for planning an optimal or suboptimal driving path for the forklift according to the current environmental information and task requirements in combination with the obstacle avoidance route of the obstacle avoidance module 108.
The pedestrian area center calculation module 107 is based on a YOLOv8 pedestrian detection algorithm improved with GhostNet, and performs pedestrian area center calculation by combining target detection and segmentation.
GhostNet is a lightweight neural network structure that reduces the computational complexity of a model by introducing lightweight convolution operations while maintaining detection accuracy. GhostNet is integrated into the YOLOv8 model, and the resulting model performs pedestrian detection and segmentation.
The detected pedestrian area is then accurately segmented to calculate the pedestrian's center position, which improves the accuracy and real-time performance of dynamic obstacle avoidance, allows the system to better understand the position and behavior of pedestrians in the environment, and supports more reasonable obstacle avoidance decisions.
When the intelligent unmanned forklift system based on visual navigation is used, the binocular vision module 102 mounted on the forklift main body module 101 performs stereoscopic perception of the surrounding environment and captures image information; the image preprocessing unit 103 receives and preprocesses this image information to improve image quality; the binocular ranging module 104 calculates the accurate distance of objects in the environment using the binocular vision principle and the preprocessed images; the track prediction module 105 predicts the motion track of objects in the environment based on the calculated distance information; meanwhile, the pedestrian area detection module 106 detects the pedestrian area in the image and marks it as a rectangular frame, and the pedestrian area center calculation module 107 calculates the position of the area's center point; the obstacle avoidance module 108 receives this center point position, judges whether the forklift needs to avoid the pedestrian, formulates a corresponding obstacle avoidance strategy, and controls the forklift main body module 101 to execute the obstacle avoidance operation. This solves the technical problem that, in the prior art, when an intelligent unmanned forklift system navigates with a single sensor, its performance is limited in complex environments, such as those with changing light or occlusions, so that positioning accuracy is reduced and safety accidents may even be caused.
The second embodiment of the application is:
On the basis of the first embodiment, please refer to fig. 2, fig. 2 is a schematic block diagram of a second embodiment of the present invention.
The invention provides an intelligent unmanned forklift system based on visual navigation, which further comprises an interaction module 201, an operation end 202, a self-adaptive optimization module 203, a safety monitoring module 204, a cooperative operation module 205, an energy management module 206 and a login verification module 207.
For this embodiment, the interaction module 201 is connected to the obstacle avoidance module 108, the operation end 202 is connected to the interaction module 201, and the operation end 202 is used for a manager to use, so that the obstacle avoidance module 108 can be managed through the interaction module 201.
Wherein the adaptive optimization module 203 is connected with the path planning module 110.
The adaptive optimization module 203 adopts an improved adaptive genetic algorithm (AGA) dedicated to path planning and scheduling of the unmanned forklift in complex environments. Through several technical innovations, it overcomes the limitations of the traditional genetic algorithm in practical applications, so that the algorithm can run efficiently in dynamic environments with multiple objectives and multiple constraints.
Second, the safety monitoring module 204 is also mounted on the forklift body module 101;
The safety monitoring module 204 monitors the running state and surrounding environment of the forklift in real time; once an abnormal condition is detected, it immediately raises an alarm and takes corresponding safety measures.
Again, the cooperative operation module 205 is connected with the forklift main body module 101; it links the forklift with other forklifts to realize cooperative operation, further improving obstacle avoidance and path planning.
Finally, the energy management module 206 is connected to the forklift main body module 101, and the login verification module 207 is embedded in the operation end 202. The energy management module 206 manages the energy consumption of the forklift main body module 101, while the login verification module 207 verifies the identity of the manager logging in to the operation end 202.
When the intelligent unmanned forklift system based on visual navigation of this embodiment is used, the operation end 202 is provided for a manager, and the obstacle avoidance module 108 can be managed through the interaction module 201. In specific use, the adaptive optimization module 203 adopts an improved adaptive genetic algorithm (AGA) for path planning and scheduling of the unmanned forklift in complex environments, overcoming the limitations of the conventional genetic algorithm so that it runs efficiently in multi-objective, multi-constraint dynamic environments. The energy management module 206 manages the energy consumption of the forklift main body module 101, and the login verification module 207 verifies the identity of the manager logging in to the operation end 202.
The invention also provides a method of using the intelligent unmanned forklift based on visual navigation, applied to the above intelligent unmanned forklift system based on visual navigation.
The method comprises the following steps:
The binocular vision module 102 installed on the forklift main body module 101 performs stereoscopic vision perception of the surrounding environment and captures image information;
The image preprocessing unit 103 receives the image information captured by the binocular vision module 102 and preprocesses it to improve the image quality;
The binocular distance measuring module 104 calculates the accurate distance of objects in the environment by applying the binocular vision principle to the preprocessed image information;
The track prediction module 105 predicts the motion track of objects in the environment based on the distance information calculated by the binocular distance measuring module 104; meanwhile, the pedestrian area detection module 106 detects the pedestrian area in the image, marks it as a rectangular frame, and passes the frame to the pedestrian area center calculation module 107, which calculates the center point position of the pedestrian area;
The obstacle avoidance module 108 receives the position information of the pedestrian center point, determines whether the forklift needs to avoid the pedestrian, and formulates a corresponding obstacle avoidance strategy to control the forklift main body module 101 to execute the obstacle avoidance operation.
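The patent does not specify what the preprocessing in the image preprocessing unit 103 consists of. A minimal sketch, under the assumption that it includes grayscale conversion and contrast stretching (common first steps before stereo matching), could be:

```python
import numpy as np

def preprocess(rgb):
    """Grayscale conversion (ITU-R BT.601 luma weights) followed by
    contrast stretching to the 0..1 range.  A full pipeline would
    normally add denoising and stereo rectification after this."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    lo, hi = gray.min(), gray.max()
    if hi == lo:                 # flat image: nothing to stretch
        return np.zeros_like(gray)
    return (gray - lo) / (hi - lo)
```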
By introducing the binocular vision module 102 and combining it with the image preprocessing unit 103 and the binocular distance measuring module 104, stereoscopic perception of the surrounding environment is realized. Compared with a single sensor, the binocular vision system provides richer depth information and more accurate distance measurement, copes effectively with changing light and occluding objects in complex environments, and significantly improves the navigation precision and stability of the forklift in complex scenes.
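The binocular ranging in module 104 rests on the standard pinhole-stereo relation Z = f·B/d (depth from the horizontal disparity of a matched point between the left and right images). A sketch, with the focal length and baseline in the usage note being hypothetical values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from stereo disparity: Z = f * B / d,
    with the focal length f in pixels and the camera baseline B in metres."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 700 px focal length and 0.12 m baseline, a 35 px disparity corresponds to a point about 2.4 m away.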
The addition of the pedestrian area detection module 106 and the pedestrian area center calculation module 107 enables the system to detect and track pedestrian positions in real time and to accurately calculate the center of each pedestrian area. This capability is critical to ensuring pedestrian safety, especially in pedestrian-dense or dynamically changing scenarios. In combination with the obstacle avoidance module 108, the system can plan an obstacle avoidance path in advance, effectively avoiding potential collisions with pedestrians and greatly improving safety.
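As an illustration of these two modules, a sketch of the center computation for the rectangular frames (module 107) and a simple distance-threshold avoidance policy (module 108); the thresholds are assumptions for the sketch, not values given in the patent:

```python
def pedestrian_centers(boxes):
    """Centres of the axis-aligned rectangular frames produced by the
    pedestrian area detection module; boxes are (x1, y1, x2, y2) in pixels."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]

def avoidance_action(pedestrian_distance_m, stop_m=1.5, slow_m=4.0):
    """Illustrative two-threshold policy: hard stop inside stop_m,
    reduced speed inside slow_m, otherwise continue on the planned path."""
    if pedestrian_distance_m < stop_m:
        return "stop"
    if pedestrian_distance_m < slow_m:
        return "slow"
    return "continue"
```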
The track prediction module 105 provides the forklift with a longer planning horizon and a more accurate decision basis by predicting the motion tracks of pedestrians and obstacles. This helps the forklift make more reasonable path planning and speed control decisions in a complex environment, reduces sudden stops or detours caused by emergencies, and improves working efficiency.
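The prediction method itself is left unspecified in the patent; the simplest stand-in is constant-velocity extrapolation from the last two observed positions (an assumption for illustration, not the patented method):

```python
def predict_track(positions, dt, horizon_steps):
    """Constant-velocity extrapolation of a track.
    positions: observed (x, y) points, oldest first (needs at least two);
    dt: sampling interval in seconds; horizon_steps: points to predict."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon_steps + 1)]
```

A real module would typically replace this with a Kalman filter or a learned motion model to handle noisy detections and turning pedestrians.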
Through the cooperation of multiple modules, adaptive adjustment to environmental changes is realized. Whether facing varying light intensity, occlusion, or uncertain pedestrian behavior, the system maintains high navigation precision and obstacle avoidance capability through the mutual coordination of its modules, which enhances its robustness and adaptability.
The above disclosure is only a preferred embodiment of the present invention, and the scope of the invention is of course not limited thereto. Those skilled in the art will appreciate that all or part of the procedures described above, as well as equivalent changes made according to the claims, still fall within the scope of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510212936.9A CN120085649A (en) | 2025-02-26 | 2025-02-26 | Intelligent unmanned forklift system and method based on visual navigation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120085649A true CN120085649A (en) | 2025-06-03 |
Family
ID=95848509
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510212936.9A Pending CN120085649A (en) | 2025-02-26 | 2025-02-26 | Intelligent unmanned forklift system and method based on visual navigation |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120085649A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120431763A (en) * | 2025-07-07 | 2025-08-05 | 中国矿业大学(北京) | Forklift and pedestrian trajectory prediction and collision warning method based on multi-camera fusion |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN202362833U (en) * | 2011-12-08 | 2012-08-01 | 长安大学 | Binocular stereo vision-based three-dimensional reconstruction device of moving vehicle |
| CN113421210A (en) * | 2021-07-21 | 2021-09-21 | 东莞市中科三尾鱼智能科技有限公司 | Surface point cloud reconstruction method based on binocular stereo vision |
| CN114550138A (en) * | 2022-02-24 | 2022-05-27 | 江苏自然数智能科技有限公司 | Fork truck collision avoidance system based on binocular vision |
| CN119079886A (en) * | 2024-08-29 | 2024-12-06 | 昆山源之正智能科技有限公司 | Fork-type mobile robot system based on multi-sensor fusion SLAM technology |
| CN119200588A (en) * | 2024-09-11 | 2024-12-27 | 舟山太航智能科技有限公司 | A method for automatic berthing and unberthing of ships based on video surveillance and image recognition |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR20210020945A (en) | Vehicle tracking in warehouse environments | |
| CN112540606B (en) | Obstacle avoidance method and device, scheduling server and storage medium | |
| CN112101128A (en) | A perception planning method for unmanned formula racing car based on multi-sensor information fusion | |
| KR20250133915A (en) | Path planning method and device, and crane | |
| CN111198496A (en) | Target following robot and following method | |
| Neto et al. | Real-time estimation of drivable image area based on monocular vision | |
| CN114084129A (en) | Fusion-based vehicle automatic driving control method and system | |
| CN115129063A (en) | A field work robot headland steering navigation system and navigation method | |
| CN120085649A (en) | Intelligent unmanned forklift system and method based on visual navigation | |
| CN119152490A (en) | Three-dimensional perception method for intelligent vehicle dynamic target in complex environment | |
| CN113158779B (en) | Walking method, walking device and computer storage medium | |
| JP2025072732A (en) | Autonomous Driving System | |
| CN116991104A (en) | Automatic driving device for unmanned vehicle | |
| CN120307287A (en) | A smart orchard robot and automatic detection and picking data processing method | |
| CN120669711A (en) | Automatic return method for multi-sensor data fusion of unmanned vehicle | |
| Chavan et al. | Obstacle detection and avoidance for automated vehicle: A review | |
| CN120190815A (en) | An AI-driven robot working environment safety perception method | |
| CN112306064A (en) | RGV control system and method for binocular vision identification | |
| CN120039807A (en) | Intelligent forklift management system and method based on AI anti-collision | |
| CN110913335B (en) | Automatic guided vehicle perception and positioning method, device, server and automatic guided vehicle | |
| CN118024242A (en) | Robot and positioning method thereof | |
| CN118331282A (en) | Barrier avoiding method, device and system for desert tree planting robot | |
| CN118545081A (en) | Lane departure warning method and system | |
| KR102546156B1 (en) | Autonomous logistics transport robot | |
| CN116540696A (en) | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||