CN110223212B - Dispatching control method and system for transport robot


Info

Publication number
CN110223212B
CN110223212B (application CN201910533614.9A)
Authority
CN
China
Prior art keywords
target
conveying device
state information
transmission device
state
Prior art date
Legal status
Active
Application number
CN201910533614.9A
Other languages
Chinese (zh)
Other versions
CN110223212A (en)
Inventor
张雷
Current Assignee
Shanghai Noah Wood Robot Technology Co ltd
Original Assignee
Shanghai Zhihuilin Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhihuilin Medical Technology Co ltd
Priority to CN201910533614.9A
Publication of CN110223212A
Application granted
Publication of CN110223212B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a dispatching control method and system for a transport robot. The method comprises the following steps: detecting the robot's own working state to obtain body state information; capturing a target image that contains a conveying device and an indicator light arranged at the conveying device to indicate its working state; analyzing the target image to obtain the working state information of the conveying device; determining, from the body state information and the working state information of the conveying device, the target conveying device to dock with and the type of the robot's transport task; acquiring the spatial position of the target conveying device and navigating to its location; and docking with the target conveying device according to the task type to complete loading or unloading of the goods. The invention completes transport scheduling without any involvement of a dispatching party, reducing the delays such involvement causes, thereby improving docking efficiency and cargo transport efficiency and widening the method's applicability.

Description

Dispatching control method and system for transport robot
Technical Field
The invention relates to the technical field of transportation control, and in particular to a dispatching control method and system for a transport robot.
Background
Many types of logistics transportation systems exist today, each with its own advantages. With the progress and development of artificial intelligence, using transport robots for logistics transportation has become increasingly popular.
The problem with existing logistics transportation systems is that docking between a transport robot and a conveying device requires a dispatching party, such as a server, to coordinate the whole process: the dispatching party monitors the state information of every transport robot and every conveying device, uses that information to decide which conveying device the current robot should dock with, and, once the goods on the conveying device have been fully transferred, instructs the robot to leave the device it docked with. The entire logistics process therefore depends on the dispatching party from start to finish.
How to remove this dependence on a dispatching party, which lowers cargo transport efficiency and docking efficiency and limits how widely such systems can be deployed, is a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to provide a dispatching control method and system for a transport robot that complete transport scheduling without any involvement of a dispatching party, reduce the delays such involvement causes, and thereby improve docking efficiency and cargo transport efficiency and widen the method's applicability.
The technical solution provided by the invention is as follows:
The invention provides a dispatching control method for a transport robot, comprising the following steps:
detecting the robot's own working state to obtain body state information;
capturing a target image that contains a conveying device and an indicator light arranged at the conveying device to indicate its working state;
analyzing the target image to obtain the working state information of the conveying device;
determining, from the body state information and the working state information of the conveying device, the target conveying device the robot is to dock with and the type of the robot's transport task;
acquiring the spatial position of the target conveying device and navigating to its location;
and docking with the target conveying device according to the transport task type to complete loading or unloading of the goods.
Further, analyzing the target image to obtain the working state information of the conveying device specifically comprises:
recognizing the indicator light state in the target image with a preset neural network model through a visual detection algorithm;
deriving the working state information of the conveying device from the indicator light state recognition result;
wherein the indicator light state comprises the on/off state and the shape-and-color state of the light, and the shape-and-color state comprises the color state and the shape state of the light.
Further, after analyzing the target image to obtain the working state information of the conveying device, and before determining the target conveying device to dock with and the robot's transport task type from the body state information and the working state information, the method comprises:
transmitting and sharing, with the other transport robots, the working state information of the conveying devices that each robot has obtained through its own analysis.
Further, determining the target conveying device to dock with and the robot's transport task type from the body state information and the working state information of the conveying device specifically comprises:
judging, from the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
when the robot is in the to-be-loaded state, determining the conveying device whose working state information indicates a to-be-sent state as the target conveying device, and setting the transport task type to goods receiving;
and when the robot is in the to-be-unloaded state, determining the conveying device whose working state information indicates a to-be-received state as the target conveying device, and setting the transport task type to goods delivery.
Further, the method also comprises the following steps:
when at least two candidate conveying devices matching the robot's working state are found from the body state information and the working state information, calculating the distance between the robot and each candidate conveying device;
and comparing all the distance values and determining the candidate conveying device with the smallest distance as the target conveying device.
Further, acquiring the spatial position of the target conveying device and navigating to its location specifically comprises:
detecting and recognizing, through a visual detection algorithm, at least four target semantic points of the target conveying device in the target image, a target semantic point being a fixed, highly recognizable point on the target conveying device;
calculating a first spatial position of the target conveying device from the size information of the target conveying device;
and navigating to the location of the target conveying device according to the first spatial position.
Further, acquiring the spatial position of the target conveying device and navigating to its location may alternatively comprise:
emitting detection laser toward the legs of the target conveying device and acquiring the laser coordinates of each leg in a laser coordinate system;
calculating a second spatial position of the target conveying device from the laser coordinates;
and navigating to the location of the target conveying device according to the second spatial position.
The present invention also provides a dispatching control system for transport robots, comprising a plurality of transport robots and conveying devices, each conveying device being provided with an indicator light for indicating its working state. Each transport robot comprises an image acquisition module, a processing module, a detection module, an analysis module, a control module and an execution module;
the detection module is used for detecting the robot's own working state to obtain body state information;
the image acquisition module is used for capturing a target image containing a conveying device and the indicator light arranged at the conveying device;
the processing module is connected with the image acquisition module and is used for analyzing the target image to obtain the working state information of the conveying device;
the analysis module is connected with the processing module and the detection module respectively and is used for determining, from the body state information and the working state information of the conveying device, the target conveying device the robot is to dock with and the type of the robot's transport task;
the control module is connected with the analysis module and is used for acquiring the spatial position of the target conveying device and navigating to its location;
and the execution module is connected with the analysis module and is used for docking with the target conveying device according to the transport task type after the robot has moved to its location, to complete loading or unloading of the goods.
Further, the processing module comprises a first image recognition unit and a first processing unit;
the first image recognition unit is used for recognizing the indicator light state in the target image with a preset neural network model through a visual detection algorithm;
the first processing unit is connected with the first image recognition unit and is used for deriving the working state information of the conveying device from the indicator light state recognition result;
wherein the indicator light state comprises the on/off state and the shape-and-color state of the light, and the shape-and-color state comprises the color state and the shape state of the light.
Further, each transport robot also comprises a wireless communication module;
the wireless communication module is connected with the wireless communication modules of the other transport robots and is used for transmitting and sharing the working state information of the conveying devices that each robot has obtained through its own analysis.
Further, the analysis module comprises a judging unit and a first determining unit;
the judging unit is used for judging, from the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
the first determining unit is connected with the judging unit and is used for: when the robot is in the to-be-loaded state, determining the conveying device whose working state information indicates a to-be-sent state as the target conveying device and setting the transport task type to goods receiving; and when the robot is in the to-be-unloaded state, determining the conveying device whose working state information indicates a to-be-received state as the target conveying device and setting the transport task type to goods delivery.
Further, the analysis module also comprises a second processing unit and a second determining unit;
the second processing unit is used for calculating the distance between the robot and each candidate conveying device, and comparing all the distance values, when at least two candidate conveying devices matching the robot's working state are found from the body state information and the working state information;
the second determining unit is connected with the second processing unit and is used for determining the candidate conveying device with the smallest distance as the target conveying device.
Further, the control module comprises a second image recognition unit, a third processing unit and a first navigation unit;
the second image recognition unit is used for detecting and recognizing, through a visual detection algorithm, at least four target semantic points of the target conveying device in the target image, a target semantic point being a fixed, highly recognizable point on the target conveying device;
the third processing unit is connected with the second image recognition unit and is used for calculating a first spatial position of the target conveying device from the size information of the target conveying device;
and the first navigation unit is connected with the third processing unit and is used for navigating to the location of the target conveying device according to the first spatial position.
Further, the control module may alternatively comprise a laser detection unit, a fourth processing unit and a second navigation unit;
the laser detection unit is used for emitting detection laser toward the legs of the target conveying device and acquiring the laser coordinates of each leg in a laser coordinate system;
the fourth processing unit is connected with the laser detection unit and is used for calculating a second spatial position of the target conveying device from the laser coordinates;
and the second navigation unit is connected with the fourth processing unit and is used for navigating to the location of the target conveying device according to the second spatial position.
With the dispatching control method and system for a transport robot provided by the invention, transport scheduling is completed without any involvement of a dispatching party, reducing the delays such involvement causes and thereby improving docking efficiency and cargo transport efficiency and widening the method's applicability.
Drawings
The above features, technical features, advantages and implementations of the dispatching control method and system for a transport robot are further described below, in a clearly understandable manner, with reference to the accompanying drawings and preferred embodiments.
Fig. 1 is a flowchart of an embodiment of the dispatching control method of a transport robot according to the present invention;
Fig. 2 is a schematic view of the structure of the conveying device according to the present invention;
Fig. 3 is a schematic diagram of the laser coordinate system and the world coordinate system according to the present invention;
Fig. 4 is a schematic structural diagram of an embodiment of the dispatching control system of a transport robot according to the present invention.
Detailed Description
To illustrate the embodiments of the present invention and the technical solutions in the prior art more clearly, the following description refers to the accompanying drawings. The drawings in the following description are only some examples of the invention; a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
For simplicity, the drawings only schematically show the parts relevant to the invention and do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components with the same structure or function are in some drawings only schematically illustrated or only partially labeled. In this document, "one" means not only "only one" but also "more than one".
In an embodiment of the present invention, as shown in Figs. 1 and 2, a dispatching control method of a transport robot 1 comprises:
S100, detecting the robot's own working state to obtain body state information;
S200, capturing a target image that contains a conveying device 2 and an indicator light 21 arranged at the conveying device 2 to indicate its working state;
S300, analyzing the target image to obtain the working state information of the conveying device 2;
Specifically, the transport robot 1 queries its own working state to obtain body state information. The body state information of the transport robot 1 includes: a to-be-loaded state, in which the robot is idle and waits for a conveying device to unload goods onto it; a to-be-unloaded state, in which the robot is idle and waits to unload goods onto a conveying device; a loading execution state, in which the robot is docked with a conveying device and goods are being loaded; and an unloading execution state, in which the robot is docked with a conveying device and goods are being unloaded. In addition, the body state information may further include available resource information obtained by analyzing system resources (such as power and CPU resources), that is, remaining power information, remaining CPU capacity and the like.
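The state vocabulary above can be summarized in code. The following is a minimal, illustrative sketch (the enum and member names are not from the patent) of the robot body states and conveying-device working states used throughout the later sketches:

```python
from enum import Enum, auto

class BodyState(Enum):
    """Working states of the transport robot 1 described above (names illustrative)."""
    TO_BE_LOADED = auto()    # idle, waiting for a conveying device to unload goods onto the robot
    TO_BE_UNLOADED = auto()  # idle, waiting to unload goods onto a conveying device
    LOADING = auto()         # docked with a conveying device, goods being loaded
    UNLOADING = auto()       # docked with a conveying device, goods being unloaded

class ConveyorState(Enum):
    """Working states of a conveying device 2 (names illustrative)."""
    TO_BE_SENT = auto()      # idle, holding goods that wait to be unloaded onto a robot
    TO_BE_RECEIVED = auto()  # idle, waiting to receive goods from a robot
    LOADING = auto()
    UNLOADING = auto()
```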
The transport robot 1 is provided with an image acquisition module 11 mounted at a fixed position on the front of the robot body, so that it can capture images within the field of view in front of the robot. The image acquisition module 11 may be a camera, a depth camera or the like. The transport robot 1 controls the image acquisition module 11 to capture a target image containing the conveying device 2 and the indicator light 21 arranged at the conveying device 2 to indicate its working state. The lens should be adjusted before the target image is formally captured, so that defocus does not degrade the quality of the acquired image. The target image may be a still picture or an image frame obtained by shot segmentation of a video. After acquiring a target image, the transport robot 1 preprocesses it; the preprocessing includes graying, binarization, filtering and the like, which are prior art and are not described in detail here. The preprocessed target image is then analyzed to obtain the working state information of each conveying device 2. The working state information of a conveying device includes: a to-be-received state, in which the conveying device 2 is idle and waits for a transport robot 1 to unload goods onto it; a to-be-sent state, in which the conveying device 2 is idle and waits to unload goods onto a transport robot 1; a loading execution state, in which the conveying device is docked with a transport robot 1 and goods are being loaded; and an unloading execution state, in which the conveying device is docked with a transport robot 1 and goods are being unloaded.
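The graying, binarization and filtering steps mentioned above are standard; a minimal OpenCV sketch (parameters are illustrative, not from the patent) might look like this:

```python
import cv2

def preprocess(frame_bgr):
    """Graying, filtering and binarization of a captured target image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # graying
    blurred = cv2.medianBlur(gray, 5)                    # filtering (noise removal)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    return binary
```

Note that color-state recognition must run on the original color image, since binarization discards color; this is exactly the sensitivity the patent later addresses by also using the shape of the indicator light.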
S400, determining, from the body state information and the working state information of the conveying devices, the target conveying device 2 to dock with and the type of the robot's transport task;
S500, acquiring the spatial position of the target conveying device 2 and navigating to its location;
and S600, docking with the target conveying device 2 according to the transport task type to complete loading or unloading of the goods.
Specifically, the transport robot 1 analyzes the acquired body state information together with the working state information obtained from the images to determine its own transport task type and the target conveying device 2 it should dock with. The transport robot 1 then locates itself, acquires the spatial position of the target conveying device 2, autonomously plans a path from its own position to that spatial position to generate a moving route, and moves along the generated route to the location of the target conveying device 2. After arriving, the robot exchanges information with the target conveying device 2, that is, it gives the device a ready-for-loading-or-unloading trigger signal, so that the two dock with each other and the loading docking operation or unloading docking operation between them is completed.
In this embodiment, it must be determined whether the transport robot 1 is to receive goods from a conveying device 2 or deliver goods to one, so each transport robot 1 needs its own body state information. A conveying device 2 may be in a goods-sending state or a goods-receiving state, and in application scenes such as hospitals, logistics warehouses, supermarkets or libraries there may be many conveying devices 2, possibly placed close together, so the working state information of each conveying device 2 must be analyzed. From its body state information and the working state information of the conveying devices 2, the transport robot 1 itself matches and finds the target conveying device 2 to dock with, formulates the corresponding goods transport task, locates the target conveying device 2 to obtain its spatial position, and then navigates to it to perform the loading or unloading docking operation. The invention links the transport robot 1 and the conveying device 2 organically: the docking partner is determined autonomously through this linkage, and the robot navigates to the destination on its own to complete loading and unloading. This mutual complementarity raises overall efficiency, broadens the range of applications, and effectively solves the docking problem between transport robot 1 and conveying device 2. Transport scheduling is completed without any involvement of a dispatching party, which reduces the delays such involvement causes, further improves docking efficiency and cargo transport efficiency, and widens the method's applicability.
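Steps S100-S600 form one autonomous scheduling cycle per robot. A high-level sketch of that cycle follows; all helper methods are hypothetical placeholders, not an API defined by the patent:

```python
def dispatch_cycle(robot):
    """One autonomous scheduling cycle, mirroring steps S100-S600."""
    body_state = robot.detect_body_state()                          # S100
    image = robot.capture_target_image()                            # S200
    conveyor_states = robot.parse_indicator_states(image)           # S300
    target, task = robot.match_target(body_state, conveyor_states)  # S400
    if target is None:
        return                                     # nothing to dock with this cycle
    position = robot.locate(target)                # S500: visual or laser positioning
    robot.navigate_to(position)
    robot.dock_and_transfer(target, task)          # S600: load or unload the goods
```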
Based on the foregoing embodiment, analyzing the target image to obtain the working state information of the conveying device 2 specifically comprises the steps of:
S210, recognizing the indicator light state in the target image with a preset neural network model through a visual detection algorithm;
S220, deriving the working state information of the conveying device 2 from the indicator light state recognition result;
wherein the indicator light state comprises the on/off state and the shape-and-color state of the light, and the shape-and-color state comprises the color state and the shape state of the light.
Specifically, after the transport robot 1 acquires a target image in real time, it preprocesses the image and performs indicator light state recognition on the local region of the preprocessed image that contains the indicator light 21, obtaining an indicator light state recognition result. Note that the external environment strongly affects the perceived color of the indicator light 21, and the color may even be filtered out as noise during image preprocessing. Because the indicator light 21 signals the current working state of the conveying device 2 through different color states, combining color with the shape of the indicator light 21 provides effective redundancy: it suppresses the interference of external factors such as illumination, and of color fading of the light itself, on color recognition. The working state information of the conveying device 2 can therefore be recognized accurately under a wide range of external and internal influences. This in turn improves the accuracy with which the transport robot 1, using the more reliable working state information together with its body state information, finds the target conveying device 2 matching its own working state, raises the accuracy of autonomous scheduling for cargo docking, reduces the probability of wrong docking and loading/unloading operations, and thereby indirectly improves docking efficiency and cargo transport efficiency. Typically, the indicator color states include red, green, yellow, blue and the like; the indicator shape states include circle, square, triangle and the like; and the on/off states are simply the light 21 being on or off. The indicator light state recognition is performed with an existing visual detection algorithm: for example, the preset neural network model may be based on Faster R-CNN with a MobileNetV2 backend network, which recognizes and locates the indicator light 21 in the target image before its state is recognized. As further examples, the preset neural network model may be based on R-CNN (or SPP-NET, Fast R-CNN, YOLO, SSD) with a MobileNetV2 (or MobileNetV1) backend network.
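Once the detector (for example, the Faster R-CNN described above) has produced a bounding box for an indicator light 21, its on/off and color states can be read from the cropped region. A minimal sketch; the HSV thresholds are illustrative assumptions, and red/green are the two colors used in the example later in this description:

```python
import cv2
import numpy as np

def classify_indicator(crop_bgr):
    """Return (on/off, color) for a detected indicator-light crop."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    if hsv[..., 2].mean() < 60:                # dim crop: the light is off
        return "off", None
    bright = hsv[..., 2] > 128
    hue = hsv[..., 0][bright]                  # hue of the bright pixels only
    if hue.size == 0:
        return "on", "unknown"
    h = float(np.median(hue))
    if h < 10 or h > 170:                      # OpenCV hue wraps at red (0-179 scale)
        return "on", "red"
    if 40 <= h <= 85:
        return "on", "green"
    return "on", "unknown"
```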
In this embodiment, the visual detection algorithm and the preset neural network model together locate the indicator light 21 and recognize its state, and the recognition is effective, fast and accurate. The recognition result lets the transport robot 1 derive the working state information of the conveying device 2, so that, from its body state information and that working state information, the robot matches and finds the target conveying device 2 by itself, formulates the corresponding goods transport task, and completes the loading or unloading operation accurately and reliably. The transport robot 1 moves toward the location of the target conveying device 2 and, on arrival, the two dock with each other and complete the loading or unloading of goods without manual intervention, which reduces cost and improves cargo transport efficiency.
In an embodiment of the present invention, a dispatching control method of a transport robot 1 comprises:
S100, detecting the robot's own working state to obtain body state information;
S200, capturing a target image that contains a conveying device 2 and an indicator light 21 arranged at the conveying device 2 to indicate its working state;
S300, analyzing the target image to obtain the working state information of the conveying device 2;
S301, transmitting and sharing, with the other transport robots 1, the working state information of the conveying devices 2 that each robot has obtained through its own analysis;
S400, determining, from the body state information and the working state information of the conveying devices, the target conveying device 2 to dock with and the type of the robot's transport task;
S500, acquiring the spatial position of the target conveying device 2 and navigating to its location;
and S600, docking with the target conveying device 2 according to the transport task type to complete loading or unloading of the goods.
Specifically, the parts identical to the embodiments above are not repeated here. Compared with the previous embodiment, each robot exchanges information with the other robots, so the working state information of the conveying devices 2 that each robot obtains through its own analysis is shared among all transport robots 1 within the preset scene area. Once working state information is shared among the transport robots 1, fewer target images of the conveying devices 2 and indicator lights 21 around each robot are needed, the probability that several robots simultaneously and redundantly recognize the working state of the same conveying device 2 drops, useless workload decreases, and the system resources wasted on such useless work are saved. In addition, sharing working state information among the transport robots 1 extends the tracking of real-time working state information to all conveying devices 2 in a wide application scene, reduces each robot's blind-spot rate with respect to working state information, further improves the docking success rate between the transport robots 1 and the conveying devices 2 in the scene, lowers the idle rate of robots and conveying devices in the whole system, and thus indirectly improves docking efficiency and cargo transport efficiency.
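A minimal sketch of the sharing step S301: each robot broadcasts the (conveying device, state, timestamp) entries it has derived, and every receiver keeps the freshest entry per device. The wire format and field names are illustrative assumptions:

```python
def merge_shared_states(local, received):
    """Merge conveyor states reported by a peer robot into the local view.

    Both arguments map conveyor_id -> (state, timestamp); the freshest
    observation of each conveying device 2 wins."""
    for conveyor_id, (state, ts) in received.items():
        if conveyor_id not in local or local[conveyor_id][1] < ts:
            local[conveyor_id] = (state, ts)
    return local
```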
Based on the foregoing embodiment, the step S400 of determining the target conveying device 2 to dock with and the robot's transport task type from the body state information and the working state information of the conveying devices specifically comprises the steps of:
S410, judging, from the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
S420, when the robot is in the to-be-loaded state, determining the conveying device 2 whose working state information indicates a to-be-sent state as the target conveying device 2, and setting the transport task type to goods receiving;
S430, when the robot is in the to-be-unloaded state, determining the conveying device 2 whose working state information indicates a to-be-received state as the target conveying device 2, and setting the transport task type to goods delivery.
Specifically, the transport robot 1 analyzes its body state information and judges whether it is currently in the to-be-loaded state; if it is, the judgment stops there, otherwise it judges whether it is currently in the to-be-unloaded state. The order may of course be reversed: first judge whether the robot is currently in the to-be-unloaded state, stop if it is, and otherwise judge whether it is in the to-be-loaded state.
Once the current transport robot 1 finds itself in the to-be-loaded state, it goes through the working state information of the conveying devices 2 and checks whether each is currently in the to-be-sent state. As soon as a conveying device 2 currently in the to-be-sent state is found, the search stops: that device is determined to be the target conveying device 2 and the robot's transport task type is set to goods receiving. Otherwise the robot moves on to the next conveying device 2 until it has determined one to be the target conveying device 2.
Likewise, once the current transport robot 1 finds itself in the to-be-unloaded state, it checks whether each conveying device 2 is currently in the to-be-received state. As soon as a conveying device 2 currently in the to-be-received state is found, the search stops: that device is determined to be the target conveying device 2 and the robot's transport task type is set to goods delivery. Otherwise the robot moves on to the next conveying device 2 until it has determined one to be the target conveying device 2.
Illustratively, as shown in Fig. 2, several indicator lights 21 may be arranged horizontally or vertically, and the target image is divided into lit regions corresponding to the differently colored (or differently shaped) indicator lights 21 according to the arrangement direction, which simplifies color (or shape) recognition. Assume two horizontally arranged indicator lights 21: a red light (or circular light) and a green light (or square light). When the red (circular) light is on and the green (square) light is off, the conveying device 2 is in the to-be-sent state; otherwise it is in the to-be-received state. The transport robot 1 captures a target image and performs color (or shape) recognition of the indicator lights 21 and on/off state recognition on it. If the red (circular) light is on, the light is recognized as red (circular), so the transport robot 1 determines that the conveying device 2 whose red (circular) light is lit is the target conveying device 2 to dock with, and sets its transport task type to goods receiving. If the green (square) light is on, the light is recognized as green (square), so the robot determines that the conveying device 2 whose green (square) light is lit is the target conveying device 2 to dock with, and sets its transport task type to goods delivery.
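The decision logic of steps S410-S430, including the red/green example above, reduces to a small state match. A sketch reusing the BodyState/ConveyorState enums from the earlier sketch; the task-type labels are illustrative:

```python
def match_task(body_state, conveyor_states):
    """Return (candidate target conveyors, task type) per steps S410-S430.

    conveyor_states maps conveyor_id -> (state, timestamp), as in the
    sharing sketch above."""
    if body_state is BodyState.TO_BE_LOADED:
        wanted, task = ConveyorState.TO_BE_SENT, "goods_receiving"
    elif body_state is BodyState.TO_BE_UNLOADED:
        wanted, task = ConveyorState.TO_BE_RECEIVED, "goods_delivery"
    else:
        return [], None                      # robot is busy loading or unloading
    targets = [cid for cid, (state, _) in conveyor_states.items() if state is wanted]
    return targets, task
```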
In this embodiment, the transport robot 1 can formulate and generate its task independently from its own body state information and the working state information of the conveying devices 2, and the docking between transport robot 1 and conveying device 2 completes the goods scheduling and transfer work. The whole transport scheduling is completed without a dispatching party, which reduces the delays such involvement causes, improves docking efficiency and cargo transport efficiency, and widens the method's applicability.
Based on the foregoing embodiment, the method further comprises the steps of:
S401, when at least two candidate conveying devices 2 matching the robot's working state are found from the body state information and the working state information, calculating the distance between the robot and each candidate conveying device 2;
S402, comparing all the distance values and determining the candidate conveying device 2 with the smallest distance as the target conveying device 2.
Specifically, if the application scene contains many conveying devices 2, the transport robot 1 may find, from its body state information and the working state information of the devices, at least two candidate conveying devices 2 matching its own to-be-loaded or to-be-unloaded state; that is, when the robot is in the to-be-loaded state there may be at least two conveying devices 2 in the to-be-sent state, and when it is in the to-be-unloaded state there may be at least two in the to-be-received state. The transport robot 1 may then communicate with all candidate conveying devices 2 matching its working state and measure the signal strength of each link, calculating the distance to each candidate from the signal strength. Alternatively, it may emit a detection signal (for example laser or infrared) toward each candidate conveying device 2 and, after receiving the reflected signal, calculate the distance from the emission time, the reception time and the propagation speed of the signal. Any method of calculating the distance between the transport robot 1 and each candidate conveying device 2 falls within the scope of the invention. Having calculated the distances, the robot compares them and takes the candidate conveying device 2 with the smallest distance as the target conveying device 2.
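A sketch of S401-S402 under the time-of-flight variant described above: the distance follows from the emission time, the reception time and the propagation speed (halved for the round trip), and the smallest distance wins. Names and the default speed (that of light, for laser or infrared) are illustrative:

```python
def tof_distance(t_emit, t_receive, speed=3.0e8):
    """Round-trip time-of-flight distance to one candidate conveying device, in metres."""
    return (t_receive - t_emit) * speed / 2.0

def nearest_candidate(distances):
    """distances maps conveyor_id -> distance; return the id with the smallest value (S402)."""
    return min(distances, key=distances.get)
```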
In this embodiment, when the transport robot 1 finds at least two candidate conveying devices 2, it avoids the deadlock of being unable to choose the target conveying device 2 with which to complete the loading or unloading operation: it independently selects the candidate with the smallest distance as the target. This guarantees that each transport robot 1 covers the smallest possible distance over the whole transfer, which improves cargo transport efficiency.
Based on the foregoing embodiment, the step S500 of acquiring the spatial position of the target conveying device 2 and navigating to its location specifically comprises the steps of:
S510, detecting and recognizing, through a visual detection algorithm, at least four target semantic points 22 of the target conveying device 2 in the target image, the at least four target semantic points 22 being maximum-outline vertices of the target conveying device 2 and not coplanar;
S520, calculating a first spatial position of the target conveying device 2 from the size information of the target conveying device 2;
S530, navigating to the location of the target conveying device 2 according to the first spatial position.
Specifically, as shown in Fig. 2, after the transport robot 1 has determined the target conveying device 2, it performs image recognition on the corresponding target image. Because, in an unconstrained scene, the target conveying device 2 may appear at different angles in the captured image, the visual detection algorithm first recognizes the target conveying device 2 and then regresses at least four of its target semantic points 22. A semantic point is a fixed, highly recognizable, describable point on the conveying device 2 that pins down the device's position within the application scene; a target semantic point is simply a semantic point on the target conveying device 2. Since the conveying device 2 is rigid, the spatial position of a semantic point relative to the application scene is fixed, and its high recognizability makes it easy to regress in subsequent images compared with other points on the device. The more semantic points there are, and the more exactly they coincide with the maximum-outline vertices of the conveying device 2, the more accurately the transport robot 1 can calculate the first spatial position of the target conveying device 2.
After the transport robot 1 captures a target image, it locates and recognizes the target conveying device 2 in the image with an existing visual detection algorithm, and then locates and recognizes at least four target semantic points 22; regressing the target semantic points 22 is analogous to regressing facial landmark points on a human face. For example, a second preset neural network model may be based on Faster R-CNN with a MobileNetV2 backend network: the model recognizes and locates the target conveying device 2 in the image, and the at least four target semantic points 22 are then located. As further examples, the second preset neural network model may be based on R-CNN (or SPP-NET, Fast R-CNN, YOLO, SSD) with a MobileNetV2 (or MobileNetV1) backend. Since the dimensions of the target conveying device 2 are known, the spatial coordinates of the at least four target semantic points 22 relative to the image acquisition module 11, and hence of the target conveying device 2 relative to the module, can be computed directly with the EPnP algorithm using the camera calibration result. The image acquisition module 11 is mounted at a fixed position on the transport robot 1, so the spatial relation between robot and module is known; with the pixel coordinates of the target conveying device 2 in the image known, its spatial coordinates in the world coordinate system are computed from the transformation between the world coordinate system and the camera coordinate system, and since the origin of the world coordinate system is known, the first spatial position of the target conveying device 2 in the application scene is obtained.
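The EPnP step above maps at least four known 3D semantic points (from the conveying device's known dimensions) and their detected pixel positions to a pose. OpenCV exposes EPnP directly; a sketch, where the calibration matrix K and the distortion coefficients come from the camera calibration mentioned above:

```python
import cv2
import numpy as np

def conveyor_pose_epnp(object_pts, image_pts, K, dist_coeffs):
    """Pose of the target conveying device 2 in the camera frame via EPnP.

    object_pts: (N>=4, 3) semantic points in the device's own frame, from its size;
    image_pts:  (N>=4, 2) the same points located in the target image."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, dtype=np.float64),
        np.asarray(image_pts, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP failed to converge")
    R, _ = cv2.Rodrigues(rvec)   # rotation: device frame -> camera frame
    return R, tvec               # compose with the robot pose for world coordinates
```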
In this embodiment, the preset neural network model consists of a base structure and a backend network. The target conveying device 2 in the target image is recognized first, and the at least four target semantic points 22 are then located: a coarse regression first locates the target conveying device 2 itself and a fine regression then identifies the target semantic points 22, forming a coarse-to-fine cascade regression. This stepwise regression effectively avoids overfitting and greatly improves both the speed and the quality of recognition and localization.
Constructing and training the first and second preset neural network models is prior art. Illustratively, the target conveying device 2 is framed in advance and four target semantic points 22 are defined on it; training sample images with the device and the four points pre-annotated are collected, the second preset neural network model is trained on them, and the trained model is then used for recognition. The first preset neural network model is constructed and trained in the same way and is not described further here.
Based on the foregoing embodiment, the step S500 of acquiring the spatial position of the target conveying device 2 and navigating to its location may alternatively comprise the steps of:
S540, emitting detection laser toward the legs 23 of the target conveying device 2 and acquiring the laser coordinates of each leg 23 in a laser coordinate system;
S550, calculating a second spatial position of the target conveying device 2 from the laser coordinates;
S560, navigating to the location of the target conveying device 2 according to the second spatial position.
Specifically, a laser transceiver provided on the transport robot 1 emits detection laser toward the legs 23 of the target conveying device 2 shown in Fig. 2. As shown in Fig. 3, Ow-XwYwZw is the world coordinate system and Oc-XcYcZc the laser coordinate system: the laser emission direction is the x-axis, the laser scanning direction the y-axis, the two together forming the scanning plane, and the z-axis is perpendicular to that plane. Taking a point P as the center point of any leg 23 of the target conveying device 2, the transport robot 1 can calculate the spatial coordinates of that leg center in the world coordinate system. The relationship between the laser coordinate system and the world coordinate system is:
    [m, n, 1]ᵀ = R · [Xw, Yw, Zw, 1]ᵀ
where R is the transformation matrix, composed of the laser rotation matrix and the laser translation matrix; [m, n, 1] is the homogeneous form of the laser coordinates of the laser point P' corresponding to P in the laser coordinate system; and [Xw, Yw, Zw, 1] is the homogeneous form of the world coordinates of P in the world coordinate system. The laser rotation and translation matrices can be calculated from several pairs of laser and world coordinates, which is prior art and not detailed here. The world coordinate system is constructed only to describe the spatial positions of the laser transceiver and the target conveying device 2 more conveniently: the laser transceiver is fixed on the transport robot 1 and the robot's spatial position in the application scene is known, so, with the laser coordinates of the center point 231 of a leg 23 of the target conveying device 2 known, the spatial coordinates of that center point in the world coordinate system are computed from the transformation between the two coordinate systems; and since the origin of the world coordinate system is known, the second spatial position of the leg center points 231 of the target conveying device 2 in the application scene is obtained.
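A sketch of the leg-center transform just described, working in the inverse direction (laser frame to world frame). Here T_world_laser, the 4x4 pose of the laser frame in the world frame obtained from the robot's known pose and the transceiver's fixed mounting offset, is the inverse of the world-to-laser transform R above; variable names are illustrative:

```python
import numpy as np

def leg_world_position(m, n, T_world_laser):
    """World coordinates of a leg center point measured at (m, n) in the laser
    scan plane (z = 0 in the laser frame)."""
    p_laser = np.array([m, n, 0.0, 1.0])   # homogeneous laser-frame point
    return (T_world_laser @ p_laser)[:3]

# The second spatial position of the conveying device can then be derived from
# its leg centers, e.g. as their mean:
#   legs = [leg_world_position(m, n, T_world_laser) for (m, n) in scan_hits]
#   second_position = np.mean(legs, axis=0)
```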
In this embodiment, laser positioning enables accurate positioning in any environment and any place, with strong adaptability and high accuracy.
One embodiment of the present invention, as shown in fig. 4, is a dispatch control system of a transport robot 1, including: a plurality of transport robots 1 and a conveyor 2; the conveying device 2 is provided with an indicator light 21 for indicating the working state of the conveying device; each transport robot 1 includes: the system comprises an image acquisition module 11, a processing module 13, a detection module 12, an analysis module 14, a control module 15 and an execution module 16;
the detection module 12 is used for detecting the self working state to obtain the body state information;
the image acquisition module 11 is used for shooting and acquiring a target image, and the target image comprises a conveying device 2 and an indicator light 21 arranged at the conveying device 2;
the processing module 13 is connected with the image acquisition module 11 and is used for analyzing the target image to obtain the working state information of the conveying device 2;
the analysis module 14 is respectively connected with the processing module 13 and the detection module 12 and is used for analyzing, according to the body state information and the working state information of the conveying devices 2, the target conveying device 2 for the robot to dock with and the corresponding delivery task type;
the control module 15 is connected with the analysis module 14 and is used for acquiring the spatial position of the target conveying device 2 and navigating and moving to the position of the target conveying device 2;
and the execution module 16 is connected with the analysis module 14 and is used for docking with the target conveying device 2 according to the delivery task type after the robot has moved to the position of the target conveying device 2, so as to complete loading or unloading of the goods.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
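For orientation, the cooperation of the six modules can be sketched as one dispatch cycle; every object and method name below is an illustrative stand-in, not an interface defined by this patent:

```python
# Rough sketch of one dispatch cycle, wired in the order described above.
# All module objects and method names are hypothetical placeholders.
def dispatch_cycle(detection, camera, processing, analysis, control, executor):
    body_state = detection.self_state()            # detection module 12
    image = camera.capture()                       # image acquisition module 11
    conveyor_states = processing.analyze(image)    # processing module 13
    target, task = analysis.select(body_state, conveyor_states)  # analysis module 14
    if target is None:
        return                                     # no matching conveying device this cycle
    position = control.locate(target)              # control module 15: spatial position
    control.navigate_to(position)                  # control module 15: navigation
    executor.dock_and_transfer(target, task)       # execution module 16: load or unload
```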
Based on the foregoing embodiment, the processing module 13 includes: a first image recognition unit and a first processing unit;
the first image recognition unit is used for carrying out indicator light state recognition on the target image by using a preset neural network model through a visual detection algorithm;
the first processing unit is connected with the first image recognition unit and used for analyzing and obtaining the working state information of the conveying device 2 according to the state recognition result of the indicator light;
wherein the indicator light state comprises an indicator light on-off state and an indicator light shape-and-color state; the indicator light shape-and-color state comprises an indicator light color state and an indicator light shape state.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
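The mapping from a recognized indicator light state to a working state can be sketched as follows; the color/shape-to-state table is an assumption for illustration, since the patent leaves the concrete encoding to the deployment:

```python
from dataclasses import dataclass

@dataclass
class LightState:
    lit: bool     # on-off state
    color: str    # e.g. "green", "red"
    shape: str    # e.g. "circle", "arrow"

def working_state(light: LightState) -> str:
    """Derive the conveying device's working state from the recognized light state."""
    if not light.lit:
        return "idle"                    # unlit light: device not in service (assumed)
    if light.color == "green" and light.shape == "circle":
        return "to_be_sent"              # goods ready for pickup (assumed encoding)
    if light.color == "red" and light.shape == "circle":
        return "to_be_received"          # waiting for incoming goods (assumed encoding)
    return "unknown"

print(working_state(LightState(lit=True, color="green", shape="circle")))  # to_be_sent
```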
Based on the foregoing embodiment, each transport robot 1 further includes: a wireless communication module;
and the wireless communication module is connected with the wireless communication modules of the other transport robots 1 and is used for transmitting and sharing the working state information of the conveying devices 2 obtained by respective analysis.
Based on the foregoing embodiment, the analysis module 14 includes: a judging unit and a first determining unit;
the judging unit is used for judging whether the robot itself is in the to-be-loaded state or the to-be-unloaded state according to the body state information;
the first determining unit is connected with the judging unit and is used for determining, when the robot is in the to-be-loaded state, that a conveying device 2 whose working state information indicates the to-be-sent state is the target conveying device 2 and that the delivery task type is the cargo receiving type; and, when the robot is in the to-be-unloaded state, that a conveying device 2 in the to-be-received state is the target conveying device 2 and that the delivery task type is the cargo delivery type.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
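A minimal sketch of this judging and determining logic, assuming simple string labels for the states and task types:

```python
def select_targets(body_state: str, conveyors: dict):
    """conveyors maps conveying-device id -> working state string."""
    if body_state == "to_be_loaded":
        wanted, task = "to_be_sent", "receive_goods"      # dock where goods await pickup
    elif body_state == "to_be_unloaded":
        wanted, task = "to_be_received", "deliver_goods"  # dock where goods are awaited
    else:
        return [], None                                   # neither state: nothing to do
    candidates = [cid for cid, state in conveyors.items() if state == wanted]
    return candidates, task

print(select_targets("to_be_loaded", {"A": "to_be_sent", "B": "idle"}))
# -> (['A'], 'receive_goods')
```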
Based on the foregoing embodiment, the analysis module 14 further includes: a second processing unit and a second determining unit;
the second processing unit is used for calculating, when at least two candidate conveying devices 2 whose working states match the robot's own state are obtained by analysis according to the body state information and the working state information, the distance value between the robot and each candidate conveying device 2, and comparing all the distance values;
and the second determining unit is connected with the second processing unit and is used for determining the candidate conveying device 2 with the smallest distance value as the target conveying device 2.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
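When several candidates remain, the smallest-distance rule can be sketched in a few lines; positions are assumed to be known 2-D map coordinates:

```python
import math

def nearest_candidate(robot_pos, candidate_positions):
    """candidate_positions maps conveying-device id -> (x, y) in the map frame."""
    return min(candidate_positions,
               key=lambda cid: math.dist(robot_pos, candidate_positions[cid]))

# B is about 1.80 m away versus 5.00 m for A, so B becomes the target
print(nearest_candidate((0.0, 0.0), {"A": (3.0, 4.0), "B": (1.0, 1.5)}))  # B
```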
Based on the foregoing embodiment, the control module 15 includes: the second image recognition unit, the third processing unit and the first navigation moving unit;
the second image recognition unit is used for detecting and recognizing at least four target semantic points 22 of the target conveying device 2 in the target image through a visual detection algorithm; each target semantic point 22 is a fixed, highly recognizable point on the target conveying device 2;
the third processing unit is connected with the second image recognition unit and is used for calculating and obtaining the first spatial position of the target conveying device 2 according to the size information of the target conveying device 2;
and the first navigation moving unit is connected with the third processing unit and is used for navigating and moving to the position of the target conveying device 2 according to the first spatial position.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
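One common way to realize such a unit is a perspective-n-point (PnP) solve: the four semantic points' 3-D positions are known from the device's size information, and their pixel positions come from the image recognition unit. The sketch below uses OpenCV's solvePnP; all coordinates and the camera intrinsics are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

# Four semantic-point positions on the conveying device (metres), taken
# from its known size information; placeholder values.
model_pts = np.array([[0.0, 0.0, 0.0],
                      [0.8, 0.0, 0.0],
                      [0.8, 1.2, 0.0],
                      [0.0, 1.2, 0.0]], dtype=np.float64)

# The same four points as detected in the target image (pixels); placeholders.
image_pts = np.array([[310.0, 420.0],
                      [505.0, 418.0],
                      [502.0, 180.0],
                      [312.0, 183.0]], dtype=np.float64)

# Assumed pinhole camera intrinsics of the image acquisition module.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
if ok:
    # tvec is the device origin expressed in the camera frame: the first
    # spatial position, up to the assumed calibration.
    print("conveying device position in the camera frame (m):", tvec.ravel())
```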
Based on the foregoing embodiment, the control module 15 includes: the laser detection unit, the fourth processing unit and the second navigation moving unit;
the laser detection unit is used for emitting detection laser toward the support legs 23 of the target conveying device 2 and acquiring the laser coordinates of each support leg 23 in the laser coordinate system;
the fourth processing unit is connected with the laser detection unit and is used for calculating a second spatial position of the target conveying device 2 according to the laser coordinates;
and the second navigation moving unit is connected with the fourth processing unit and is used for navigating and moving to the position of the target conveying device 2 according to the second spatial position.
Specifically, this is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment. Details are not repeated here.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also fall within the protection scope of the present invention.

Claims (14)

1. A scheduling control method of a transport robot, characterized by comprising the steps of:
detecting the working state of the body to obtain body state information; the body state information comprises a to-be-loaded state and a to-be-unloaded state;
shooting and acquiring a target image, wherein the target image comprises a conveying device and an indicator light which is arranged at the conveying device and is used for indicating the working state of the conveying device;
analyzing the target image to obtain the working state information of the conveying device; the working state information comprises a to-be-sent state and a to-be-received state;
analyzing and obtaining, according to the body state information and the working state information of the conveying device, a target conveying device to dock with and the corresponding delivery task type; the delivery task type comprises a cargo receiving type and a cargo delivery type;
acquiring the spatial position of the target conveying device, and navigating and moving to the position of the target conveying device;
and docking with the target conveying device according to the delivery task type, so as to complete loading or unloading of the goods.
2. The scheduling control method of a transport robot according to claim 1, wherein the analyzing the target image to obtain the working state information of the conveying device specifically comprises the steps of:
carrying out indicator light state recognition on the target image by using a preset neural network model through a visual detection algorithm;
analyzing and obtaining the working state information of the conveying device according to the indicator light state recognition result;
wherein the indicator light state comprises an indicator light on-off state and an indicator light shape-and-color state; the indicator light shape-and-color state comprises an indicator light color state and an indicator light shape state.
3. The scheduling control method of a transport robot according to claim 1, wherein, after the target image is analyzed to obtain the working state information of the conveying device and before the target conveying device and the delivery task type are determined according to the body state information and the working state information, the method further comprises the step of:
transmitting to, and sharing with, the other transport robots the working state information of the conveying devices obtained by respective analysis.
4. The scheduling control method of a transport robot according to claim 1, wherein the analyzing, according to the body state information and the working state information of the conveying device, the target conveying device to dock with and the delivery task type specifically comprises the steps of:
judging whether the robot itself is in the to-be-loaded state or the to-be-unloaded state according to the body state information;
when the robot is in the to-be-loaded state, determining that a conveying device in the to-be-sent state is the target conveying device, and determining that the delivery task type is the cargo receiving type;
and when the robot is in the to-be-unloaded state, determining that a conveying device in the to-be-received state is the target conveying device, and determining that the delivery task type is the cargo delivery type.
5. The scheduling control method of a transport robot according to any one of claims 1 to 4, further comprising the steps of:
when at least two candidate conveying devices whose working states match the robot's own state are obtained by analysis according to the body state information and the working state information, calculating the distance value between the robot and each candidate conveying device;
and comparing all the distance values, and determining the candidate conveying device corresponding to the smallest distance value as the target conveying device.
6. The scheduling control method of a transport robot according to any one of claims 1 to 4, wherein the acquiring the spatial position of the target conveying device and navigating and moving to the position of the target conveying device specifically comprises the steps of:
detecting and recognizing at least four target semantic points of the target conveying device in the target image through a visual detection algorithm; wherein each target semantic point is a fixed, highly recognizable point on the target conveying device;
calculating to obtain a first spatial position of the target conveying device according to the size information of the target conveying device;
and navigating and moving to the position of the target conveying device according to the first spatial position.
7. The scheduling control method of a transport robot according to any one of claims 1 to 4, wherein the acquiring the spatial position of the target conveying device and navigating and moving to the position of the target conveying device specifically comprises the steps of:
emitting detection laser toward the support legs of the target conveying device, and acquiring the laser coordinates of each support leg in a laser coordinate system;
calculating to obtain a second spatial position of the target conveying device according to the laser coordinates;
and navigating and moving to the position of the target conveying device according to the second spatial position.
8. A dispatch control system for a transport robot, comprising: a plurality of transport robots and transport devices; the conveying device is provided with an indicator light for indicating the working state of the conveying device; each of the transport robots includes: the device comprises an image acquisition module, a processing module, a detection module, an analysis module, a control module and an execution module;
the detection module is used for detecting the robot's own working state to obtain body state information; the body state information comprises a to-be-loaded state and a to-be-unloaded state;
the image acquisition module is used for shooting and acquiring a target image, and the target image comprises a conveying device and the indicator light arranged at the conveying device;
the processing module is connected with the image acquisition module and is used for analyzing the target image to obtain the working state information of the conveying device; the working state information comprises a to-be-sent state and a to-be-received state;
the analysis module is respectively connected with the processing module and the detection module and is used for analyzing, according to the body state information and the working state information of the conveying devices, the target conveying device for the robot to dock with and the corresponding delivery task type; the delivery task type comprises a cargo receiving type and a cargo delivery type;
the control module is connected with the analysis module and is used for acquiring the spatial position of the target conveying device and navigating and moving to the position of the target conveying device;
and the execution module is connected with the analysis module and is used for docking with the target conveying device according to the delivery task type after the robot has moved to the position of the target conveying device, so as to complete loading or unloading of the goods.
9. The dispatch control system of a transfer robot of claim 8, wherein the processing module comprises: a first image recognition unit and a first processing unit;
the first image recognition unit is used for carrying out indicator light state recognition on the target image by using a preset neural network model through a visual detection algorithm;
the first processing unit is connected with the first image recognition unit and is used for analyzing and obtaining the working state information of the conveying device according to the indicator light state recognition result;
wherein the indicator light state comprises an indicator light on-off state and an indicator light shape-and-color state; the indicator light shape-and-color state comprises an indicator light color state and an indicator light shape state.
10. The scheduling control system of transport robots according to claim 8, wherein each of the transport robots further comprises: a wireless communication module;
and the wireless communication module is connected with the wireless communication modules of the other transport robots and is used for transmitting and sharing the working state information of the conveying devices obtained by respective analysis.
11. The dispatch control system of a transfer robot of claim 8, wherein the analysis module comprises: a judging unit and a first determining unit;
the judging unit is used for judging whether the robot itself is in the to-be-loaded state or the to-be-unloaded state according to the body state information;
the first determining unit is connected with the judging unit and is used for determining, when the robot is in the to-be-loaded state, that a conveying device in the to-be-sent state is the target conveying device and that the delivery task type is the cargo receiving type; and, when the robot is in the to-be-unloaded state, that a conveying device in the to-be-received state is the target conveying device and that the delivery task type is the cargo delivery type.
12. The dispatch control system of a transfer robot of any one of claims 8-11, wherein the analysis module further comprises: a second processing unit and a second determining unit;
the second processing unit is used for calculating, when at least two candidate conveying devices whose working states match the robot's own state are obtained by analysis according to the body state information and the working state information, the distance value between the robot and each candidate conveying device, and comparing all the distance values;
the second determining unit is connected with the second processing unit and is used for determining the candidate conveying device corresponding to the smallest distance value as the target conveying device.
13. The dispatch control system of a transfer robot of any one of claims 8-11, wherein the control module comprises: the second image recognition unit, the third processing unit and the first navigation moving unit;
the second image recognition unit is used for detecting and recognizing at least four target semantic points of the target conveying device in the target image through a visual detection algorithm; wherein each target semantic point is a fixed, highly recognizable point on the target conveying device;
the third processing unit is connected with the second image recognition unit and is used for calculating and obtaining a first spatial position of the target conveying device according to the size information of the target conveying device;
and the first navigation moving unit is connected with the third processing unit and is used for navigating and moving to the position of the target conveying device according to the first spatial position.
14. The dispatch control system of a transfer robot of any one of claims 8-11, wherein the control module comprises: the laser detection unit, the fourth processing unit and the second navigation moving unit;
the laser detection unit is used for emitting detection laser toward the support legs of the target conveying device and acquiring the laser coordinates of each support leg in a laser coordinate system;
the fourth processing unit is connected with the laser detection unit and is used for calculating a second spatial position of the target conveying device according to the laser coordinates;
and the second navigation moving unit is connected with the fourth processing unit and is used for navigating and moving to the position of the target conveying device according to the second spatial position.
CN201910533614.9A 2019-06-20 2019-06-20 Dispatching control method and system for transport robot Active CN110223212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910533614.9A CN110223212B (en) 2019-06-20 2019-06-20 Dispatching control method and system for transport robot


Publications (2)

Publication Number Publication Date
CN110223212A CN110223212A (en) 2019-09-10
CN110223212B true CN110223212B (en) 2021-05-18

Family

ID=67814027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910533614.9A Active CN110223212B (en) 2019-06-20 2019-06-20 Dispatching control method and system for transport robot

Country Status (1)

Country Link
CN (1) CN110223212B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705453A (en) * 2019-09-29 2020-01-17 中国科学技术大学 Real-time fatigue driving detection method
CN110780651B (en) * 2019-11-01 2022-07-08 四川长虹电器股份有限公司 AGV dispatching system and method
TWI715358B (en) * 2019-12-18 2021-01-01 財團法人工業技術研究院 State estimation and sensor fusion methods for autonomous vehicles
CN113052189B (en) * 2021-03-30 2022-04-29 电子科技大学 Improved MobileNet V3 feature extraction network
CN116341884A (en) * 2023-05-31 2023-06-27 佳都科技集团股份有限公司 Data processing method and system for task emergency assignment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167607B1 (en) * 1981-05-11 2001-01-02 Great Lakes Intellectual Property Vision target based assembly
CN105292892A (en) * 2015-11-11 2016-02-03 江苏汇博机器人技术有限公司 Automatic storage system of industrial robot
CN106526534A (en) * 2016-10-17 2017-03-22 南京理工大学 Device and method for automatic sorting carrying of articles based on radio navigation through moving trolley
CN107003662A (en) * 2014-11-11 2017-08-01 X开发有限责任公司 Position control robot cluster with visual information exchange
CN108891830A (en) * 2018-06-05 2018-11-27 广州市远能物流自动化设备科技有限公司 A kind of dispatch control method and automated guided vehicle of automated guided vehicle
CN109154825A (en) * 2016-07-28 2019-01-04 X开发有限责任公司 inventory management
CN109308072A (en) * 2017-07-28 2019-02-05 杭州海康机器人技术有限公司 The Transmission Connection method and AGV of automated guided vehicle AGV
CN109341689A (en) * 2018-09-12 2019-02-15 北京工业大学 Vision navigation method of mobile robot based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9901210B2 (en) * 2012-01-04 2018-02-27 Globalfoundries Singapore Pte. Ltd. Efficient transfer of materials in manufacturing


Also Published As

Publication number Publication date
CN110223212A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110223212B (en) Dispatching control method and system for transport robot
CN111496770A (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN109241820B (en) Unmanned aerial vehicle autonomous shooting method based on space exploration
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN103196362B (en) A kind of system of the three-dimensional position for definite relative checkout gear of emitter
WO2021012682A1 (en) Transfer travel method applied to transfer robot and transfer robot thereof
CN109146919A (en) A kind of pointing system and method for combination image recognition and laser aiming
JP2004050390A (en) Work taking out device
EP3836084B1 (en) Charging device identification method, mobile robot and charging device identification system
US10401874B1 (en) Autonomous aircraft navigation
CN108872265A (en) Detection method, device and system
CN108038861A (en) A kind of multi-robot Cooperation method for sorting, system and device
CN113284178A (en) Object stacking method and device, computing equipment and computer storage medium
CN118385157A (en) Visual classified garbage automatic sorting system based on deep learning and self-adaptive grabbing
CN114170521A (en) Forklift pallet butt joint identification positioning method
CN114354630A (en) Image acquisition system and method and display panel processing equipment
CN111975776A (en) Robot movement tracking system and method based on deep learning and Kalman filtering
JP6786242B2 (en) Delivery support device, delivery support system, and delivery support program
CN115082395B (en) Automatic identification system and method for aviation luggage
CN115601736A (en) Airport flight area foreign matter is detection device in coordination
WO2022052807A1 (en) Method and device for determining occupancy state of workbench of handling robot
Tapia et al. A Comparison Between Framed-Based and Event-Based Cameras for Flapping-Wing Robot Perception
CN111854697B (en) Recognition positioning attitude determination system based on visual sensor
CN112093082B (en) On-orbit capture guiding method and device of high-orbit satellite capture mechanism
CN210155326U (en) Relative position appearance measurement system based on near-infrared beacon

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200335 Room 402, No. 33, Guang Shun Road, Shanghai
Applicant after: Shanghai zhihuilin Medical Technology Co.,Ltd.
Address before: 200335 Room 402, No. 33, Guang Shun Road, Shanghai
Applicant before: Shanghai Zhihui Medical Technology Co.,Ltd.

Address after: 200335 Room 402, No. 33, Guang Shun Road, Shanghai
Applicant after: Shanghai Zhihui Medical Technology Co.,Ltd.
Address before: 200335 Room 402, No. 33, Guang Shun Road, Shanghai
Applicant before: SHANGHAI MROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 202150 Room 205, Zone W, 2nd Floor, Building 3, No. 8, Xiushan Road, Chengqiao Town, Chongming District, Shanghai (Shanghai Chongming Industrial Park)
Patentee after: Shanghai Noah Wood Robot Technology Co.,Ltd.
Address before: 200335 Room 402, No. 33, Guang Shun Road, Shanghai
Patentee before: Shanghai zhihuilin Medical Technology Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A scheduling control method and system for transportation robots
Granted publication date: 20210518
Pledgee: CITIC Bank Co., Ltd. Shanghai Branch
Pledgor: Shanghai Noah Wood Robot Technology Co.,Ltd.
Registration number: Y2024310000751