Disclosure of Invention
The invention aims to provide a dispatching control method and a dispatching control system for a transport robot that can complete transport dispatching without any participation of a dispatching party, reduce the delay of transport tasks caused by dispatching-party involvement, thereby improve docking efficiency and cargo transport efficiency, and broaden practical adoption.
The technical solution provided by the invention is as follows:
the invention provides a dispatching control method for a transport robot, comprising the following steps:
detecting the robot's own working state to obtain body state information;
capturing a target image, wherein the target image contains a conveying device and an indicator light arranged at the conveying device for indicating the working state of the conveying device;
analyzing the target image to obtain the working state information of the conveying device;
determining, according to the body state information and the working state information of the conveying device, a target conveying device to be docked with the robot and the robot's own transport task type;
acquiring the spatial position of the target conveying device, and navigating to the position of the target conveying device;
and docking with the target conveying device according to the transport task type, to complete loading or unloading of the cargo.
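To make the sequence of steps easier to follow, a minimal Python sketch of the dispatch cycle is given below. All names and stub return values are illustrative assumptions, not part of the disclosed implementation.

```python
# Minimal sketch of the claimed dispatch cycle; every name and stub value
# here is an illustrative assumption, not part of the disclosure.

def detect_body_state():                 # step 1: own working state
    return "to_be_loaded"

def capture_target_image():              # step 2: conveyor + indicator light
    return object()                      # placeholder for a camera frame

def analyze_conveyor_states(image):      # step 3: per-conveyor working state
    return {"conveyor_A": "to_be_sent", "conveyor_B": "loading"}

def select_target(body_state, conveyor_states):   # step 4: match target/task
    wanted = "to_be_sent" if body_state == "to_be_loaded" else "to_be_received"
    task = "receive_cargo" if body_state == "to_be_loaded" else "deliver_cargo"
    for conveyor_id, state in conveyor_states.items():
        if state == wanted:
            return conveyor_id, task
    return None, None

def dispatch_cycle():
    body_state = detect_body_state()
    states = analyze_conveyor_states(capture_target_image())
    target, task = select_target(body_state, states)
    if target is not None:
        # step 5: locate the target conveying device and navigate to it
        # step 6: dock and complete loading or unloading
        pass

dispatch_cycle()
```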
Further, analyzing the target image to obtain the working state information of the conveying device specifically comprises:
performing indicator light state recognition on the target image with a preset neural network model through a visual detection algorithm;
obtaining the working state information of the conveying device from the indicator light state recognition result;
wherein the indicator light state comprises an on-off state and a shape-and-color state, and the shape-and-color state comprises a color state and a shape state.
Further, after analyzing the target image to obtain the working state information of the conveying device, and before determining the target conveying device to be docked with the robot and the robot's transport task type according to the body state information and the working state information, the method comprises the following step:
transmitting and sharing the working state information of the conveying devices, as obtained by each robot's own analysis, with the other transport robots.
Further, determining the target conveying device to be docked with the robot and the robot's transport task type according to the body state information and the working state information of the conveying device specifically comprises:
judging, according to the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
when the robot is in the to-be-loaded state, determining the conveying device whose working state information indicates a to-be-sent state as the target conveying device, and determining the transport task type as a cargo receiving type;
and when the robot is in the to-be-unloaded state, determining the conveying device whose working state information indicates a to-be-received state as the target conveying device, and determining the transport task type as a cargo delivery type.
Further, the method also comprises the following steps:
when at least two candidate conveying devices matching the robot's own working state are obtained by analysis from the body state information and the working state information, calculating the distance value between the robot and each candidate conveying device;
and comparing all the distance values, and determining the candidate conveying device with the smallest distance value as the target conveying device.
Further, acquiring the spatial position of the target conveying device and navigating to the position of the target conveying device specifically comprises:
detecting and recognizing, through a visual detection algorithm, at least four target semantic points of the target conveying device in the target image, wherein a target semantic point is a fixed, highly recognizable point on the target conveying device;
calculating a first spatial position of the target conveying device according to the size information of the target conveying device;
and navigating to the position of the target conveying device according to the first spatial position.
Further, acquiring the spatial position of the target conveying device and navigating to the position of the target conveying device specifically comprises:
emitting detection laser toward the legs of the target conveying device, and acquiring the laser coordinates of each leg in a laser coordinate system;
calculating a second spatial position of the target conveying device according to the laser coordinates;
and navigating to the position of the target conveying device according to the second spatial position.
The present invention also provides a dispatching control system for a transport robot, comprising a plurality of transport robots and conveying devices, wherein each conveying device is provided with an indicator light for indicating its working state, and each transport robot comprises an image acquisition module, a detection module, a processing module, an analysis module, a control module, and an execution module;
the detection module is used for detecting the robot's own working state to obtain body state information;
the image acquisition module is used for capturing a target image containing a conveying device and the indicator light arranged at the conveying device;
the processing module is connected with the image acquisition module and is used for analyzing the target image to obtain the working state information of the conveying device;
the analysis module is connected with the processing module and the detection module, respectively, and is used for determining, according to the body state information and the working state information of the conveying device, the target conveying device to be docked with the robot and the robot's transport task type;
the control module is connected with the analysis module and is used for acquiring the spatial position of the target conveying device and navigating to the position of the target conveying device;
and the execution module is connected with the analysis module and is used for docking with the target conveying device according to the transport task type after the robot has moved to the position of the target conveying device, to complete loading or unloading of the cargo.
Further, the processing module comprises a first image recognition unit and a first processing unit;
the first image recognition unit is used for performing indicator light state recognition on the target image with a preset neural network model through a visual detection algorithm;
the first processing unit is connected with the first image recognition unit and is used for obtaining the working state information of the conveying device from the indicator light state recognition result;
the indicator light state comprises an on-off state and a shape-and-color state; the shape-and-color state comprises a color state and a shape state.
Further, each transport robot further comprises a wireless communication module;
the wireless communication module is connected with the wireless communication modules of the other transport robots and is used for transmitting and sharing the working state information of the conveying devices obtained by each robot's own analysis.
Further, the analysis module comprises a judging unit and a first determining unit;
the judging unit is used for judging, according to the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
the first determining unit is connected with the judging unit and is used for determining, when the robot is in the to-be-loaded state, the conveying device whose working state information indicates a to-be-sent state as the target conveying device and determining the transport task type as a cargo receiving type; and, when the robot is in the to-be-unloaded state, determining the conveying device whose working state information indicates a to-be-received state as the target conveying device and determining the transport task type as a cargo delivery type.
Further, the analysis module further comprises a second processing unit and a second determining unit;
the second processing unit is used for calculating, when at least two candidate conveying devices matching the robot's working state are obtained by analysis from the body state information and the working state information, the distance value between the robot and each candidate conveying device, and for comparing all the distance values;
the second determining unit is connected with the second processing unit and is used for determining the candidate conveying device with the smallest distance value as the target conveying device.
Further, the control module comprises a second image recognition unit, a third processing unit, and a first navigation moving unit;
the second image recognition unit is used for detecting and recognizing, through a visual detection algorithm, at least four target semantic points of the target conveying device in the target image, wherein a target semantic point is a fixed, highly recognizable point on the target conveying device;
the third processing unit is connected with the second image recognition unit and is used for calculating a first spatial position of the target conveying device according to the size information of the target conveying device;
and the first navigation moving unit is connected with the third processing unit and is used for navigating to the position of the target conveying device according to the first spatial position.
Further, the control module comprises a laser detection unit, a fourth processing unit, and a second navigation moving unit;
the laser detection unit is used for emitting detection laser toward the legs of the target conveying device and acquiring the laser coordinates of each leg in a laser coordinate system;
the fourth processing unit is connected with the laser detection unit and is used for calculating a second spatial position of the target conveying device according to the laser coordinates;
and the second navigation moving unit is connected with the fourth processing unit and is used for navigating to the position of the target conveying device according to the second spatial position.
With the dispatching control method and the dispatching control system for a transport robot provided above, transport dispatching can be completed without any participation of a dispatching party, the delay of transport tasks caused by dispatching-party involvement is reduced, docking efficiency and cargo transport efficiency are improved, and practical adoption is broadened.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description is made with reference to the accompanying drawings. Obviously, the drawings in the following description are only some examples of the invention, and a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention; they do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or labeled once. In this document, "one" means not only "only one" but also "more than one".
In an embodiment of the present invention, as shown in figs. 1 and 2, a dispatching control method of a transport robot 1 comprises:
S100, detecting the robot's own working state to obtain body state information;
S200, capturing a target image, wherein the target image contains a conveying device 2 and an indicator light 21 arranged at the conveying device 2 for indicating the working state of the conveying device 2;
S300, analyzing the target image to obtain the working state information of the conveying device 2;
Specifically, the transport robot 1 queries its own working state to obtain body state information. The body state information of the transport robot 1 includes: a to-be-loaded state, in which the robot is idle and waiting for a conveying device to unload cargo onto it; a to-be-unloaded state, in which the robot is idle and waiting to unload cargo onto a conveying device; a cargo-loading execution state, in which the robot is docked with a conveying device and loading cargo; and a cargo-unloading execution state, in which the robot is docked with a conveying device and unloading cargo. In addition, the body state information may further include available resource information obtained by analyzing system resources (including power resources, CPU resources, and the like), that is, remaining power information, remaining CPU information, and so on.
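One possible way to model this body state information is sketched below; the enumeration values and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class WorkingState(Enum):
    TO_BE_LOADED = auto()    # idle, waiting for a conveyor to load cargo onto the robot
    TO_BE_UNLOADED = auto()  # idle, waiting to unload cargo onto a conveyor
    LOADING = auto()         # docked with a conveyor, cargo loading in progress
    UNLOADING = auto()       # docked with a conveyor, cargo unloading in progress

@dataclass
class BodyStateInfo:
    working_state: WorkingState
    remaining_power: float   # available power resource, e.g. battery fraction 0.0-1.0
    remaining_cpu: float     # available CPU resource, e.g. idle fraction 0.0-1.0
```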
The transport robot 1 is provided with an image acquisition module 11, arranged at a fixed position on the front side of the robot body so that it can capture images within the shooting range in front of the body. The image acquisition module 11 may include a camera, a depth camera, and the like. The transport robot 1 controls the image acquisition module 11 to capture a target image containing the conveying device 2 and the indicator light 21 arranged at the conveying device 2 for indicating its working state. The lens needs to be adjusted before the target image is formally captured, to prevent a defocus phenomenon from degrading the quality of the captured image. The target image may be a still picture, or an image frame obtained by shot segmentation of a video. After acquiring a target image, the transport robot 1 performs image preprocessing on it, including graying, binarization, filtering, and the like; since image preprocessing is prior art, it is not described in detail here. The preprocessed target image is then analyzed to obtain the working state information of each conveying device 2. The working state information of a conveying device includes: a to-be-received state, in which the conveying device 2 is idle and waiting for the transport robot 1 to unload cargo onto it; a to-be-sent state, in which the conveying device 2 is idle and waiting to unload cargo onto the transport robot 1; a cargo-loading execution state, in which the conveying device is docked with the transport robot 1 and loading cargo; and a cargo-unloading execution state, in which the conveying device is docked with the transport robot 1 and unloading cargo.
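A minimal sketch of the preprocessing chain named above (graying, filtering, binarization) using OpenCV; the kernel size and thresholding scheme are assumptions, not prescribed by the method.

```python
import cv2

def preprocess_target_image(frame_bgr):
    """Graying, filtering and binarization as described above; parameter
    values are illustrative assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)        # graying
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)               # filtering (noise suppression)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    return binary
```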
S400, determining, according to the body state information and the working state information of the conveying devices, the target conveying device 2 to be docked with the robot and the robot's transport task type;
S500, acquiring the spatial position of the target conveying device 2, and navigating to the position of the target conveying device 2;
and S600, docking with the target conveying device 2 according to the transport task type, to complete loading or unloading of the cargo.
Specifically, the transport robot 1 analyzes the acquired body state information together with the working state information obtained by analysis, to determine its own transport task type and the target conveying device 2 with which it is to dock. The transport robot 1 then locates itself to obtain its own position information, obtains the spatial position of the target conveying device 2, autonomously performs path planning according to the position information and the spatial position to generate a moving route, and moves to the position of the target conveying device 2 along that route. After the transport robot 1 reaches the position of the target conveying device 2, it performs information interaction with the target conveying device 2; that is, the transport robot 1 gives the target conveying device 2 a trigger signal indicating readiness for loading or unloading, so that the two dock with each other and complete the cargo-loading docking operation or the cargo-unloading docking operation.
In this embodiment, it must be determined whether the transport robot 1 is to receive cargo from a conveying device 2 or to deliver cargo to a conveying device 2, so each transport robot 1 needs to obtain its own body state information. Because a conveying device 2 may be in a cargo-sending state or a cargo-receiving state, and because application scenes such as hospitals, logistics warehouses, supermarkets, or libraries may contain many conveying devices 2 whose placement positions may be concentrated, the working state information of each conveying device 2 must be determined by analysis. Then, according to its own body state information and the working state information of the conveying devices 2, the transport robot 1 matches and finds the target conveying device 2 to dock with, formulates and generates the corresponding cargo transport task, locates the target conveying device 2 to obtain its spatial position, and navigates to it to perform the loading or unloading operation (the cargo-loading docking operation or the cargo-unloading docking operation). According to the invention, the transport robot 1 and the conveying device 2 are organically linked: the docking counterpart is determined autonomously through this linkage, and the robot navigates to the destination automatically to complete the loading or unloading of the cargo. The two thereby complement each other, improving overall efficiency and widening applicability. The docking problem between the transport robot 1 and the conveying device 2 is effectively solved, and transport dispatching can be completed without any participation of a dispatching party, which reduces the delay of transport tasks caused by dispatching-party involvement, improves docking efficiency and cargo transport efficiency, and broadens practical adoption.
Based on the foregoing embodiment, step S300 of analyzing the target image to obtain the working state information of the conveying device 2 specifically comprises the steps of:
S310, performing indicator light state recognition on the target image with a preset neural network model through a visual detection algorithm;
S320, obtaining the working state information of the conveying device 2 from the indicator light state recognition result;
the indicator light state comprises an on-off state and a shape-and-color state; the shape-and-color state comprises a color state and a shape state.
Specifically, after the transport robot 1 acquires a target image in real time, the target image is preprocessed, and indicator light state recognition is performed on the local part of the preprocessed image that contains the indicator light 21, yielding an indicator light state recognition result. Preferably, the shape state is used in combination with the color state: the external environment has a great influence on the perceived color of the indicator light 21, and the color may even be filtered out as noise during image preprocessing. By having the indicator light 21 indicate the current working state of the conveying device 2 through different color states and complementing this with the shape of the indicator light 21, interference from external factors such as illumination, as well as color attenuation of the indicator light 21 itself, can be effectively eliminated. The working state information of the conveying device 2 can thus be recognized accurately under a variety of external and internal influences, which improves the accuracy with which the transport robot 1 finds the target conveying device 2 matching its own working state from the more reliable working state information combined with its body state information, and improves the accuracy of automatic dispatching for cargo docking. Because the accuracy of cargo docking improves, the probability of erroneous docking and loading/unloading operations decreases, which indirectly improves docking efficiency and cargo transport efficiency. In general, the indicator light color states include red, green, yellow, blue, and the like; the indicator light shape states include circle, square, triangle, and the like; and the on-off states include the lit state and the unlit state of the indicator light 21. The indicator light state recognition is performed by an existing visual detection algorithm. For example, the preset neural network model may be based on Faster R-CNN with a MobileNetV2 backbone network, which recognizes and locates the indicator light 21 in the target image before the state recognition is performed; alternatively, the preset neural network model may be based on R-CNN, SPP-Net, Fast R-CNN, YOLO, or SSD, with a MobileNetV2 or MobileNetV1 backbone network.
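The detector itself is a standard network as named above; the sketch below only illustrates one plausible way to decode the on-off and color states from a detected indicator-light region, with all HSV thresholds being assumptions that would need on-site calibration.

```python
import cv2
import numpy as np

# Illustrative HSV hue ranges for the colors named in the text; red wraps
# around the hue axis, hence two intervals. Real values would be calibrated.
HUE_RANGES = {
    "red": [(0, 10), (170, 180)],
    "green": [(40, 80)],
    "yellow": [(20, 35)],
    "blue": [(100, 130)],
}

def classify_light_state(roi_bgr, on_threshold=120):
    """Decode (on/off, color) from an indicator-light ROI cropped by an
    upstream detector (e.g. the Faster R-CNN + MobileNetV2 model above)."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    if hsv[..., 2].mean() < on_threshold:     # low brightness -> light is off
        return "off", None
    best_color, best_score = None, 0
    for color, ranges in HUE_RANGES.items():
        mask = np.zeros(hsv.shape[:2], np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, (lo, 60, on_threshold), (hi, 255, 255))
        score = int(np.count_nonzero(mask))
        if score > best_score:
            best_color, best_score = color, score
    return "on", best_color
```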
In this embodiment, the visual detection algorithm allows the preset neural network model to locate the indicator light 21 and recognize its state, yielding the indicator light state recognition result effectively, quickly, and accurately, with a good detection and recognition effect. This makes it convenient for the transport robot 1 to subsequently derive the working state information of the conveying device 2 from the recognition result, so that the robot can, from its body state information and the working state information, autonomously match and find the target conveying device 2, formulate and generate the corresponding cargo transport task, and complete the cargo loading or unloading operation accurately and reliably. The transport robot 1 moves toward the position of the target conveying device 2, and when it arrives, the two dock with each other and complete the cargo loading or unloading operation without manual participation, which reduces cost and improves cargo transport efficiency.
In another embodiment of the present invention, a dispatching control method of a transport robot 1 comprises:
S100, detecting the robot's own working state to obtain body state information;
S200, capturing a target image, wherein the target image contains a conveying device 2 and an indicator light 21 arranged at the conveying device 2 for indicating the working state of the conveying device 2;
S300, analyzing the target image to obtain the working state information of the conveying device 2;
S301, transmitting and sharing the working state information of the conveying devices 2, as obtained by each robot's own analysis, with the other transport robots 1;
S400, determining, according to the body state information and the working state information of the conveying devices, the target conveying device 2 to be docked with the robot and the robot's transport task type;
S500, acquiring the spatial position of the target conveying device 2, and navigating to the position of the target conveying device 2;
and S600, docking with the target conveying device 2 according to the transport task type, to complete loading or unloading of the cargo.
Specifically, the parts identical to the above embodiment are not repeated here. Compared with that embodiment, each robot performs information interaction with the other robots, so that the working state information of the conveying devices 2 obtained by each robot's own analysis is shared among the transport robots 1 within the preset scene area. Once working state information is shared among the transport robots 1, the number of target images each robot must capture of the conveying devices 2 and indicator lights 21 around it can be reduced, the probability of redundantly recognizing the working state information of the same conveying device 2 at the same time is reduced, and the invalid workload and the system resources wasted on it are reduced accordingly. In addition, the sharing of working state information among the transport robots 1 extends the tracking of real-time working state information to all conveying devices 2 in a wide application scene, reduces each transport robot 1's blind-spot rate with respect to working state information, further improves the docking success rate between each transport robot 1 and each conveying device 2 in the application scene, lowers the idle rate of the transport robots 1 and conveying devices 2 in the whole system, and thus indirectly improves docking efficiency and cargo transport efficiency.
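A minimal sketch of how shared observations might be merged so that every robot tracks all conveying devices; the message layout and the timestamp rule are assumptions.

```python
def merge_shared_states(local, received):
    """local / received: {conveyor_id: (timestamp, working_state)}.
    Keep the most recent observation per conveying device, so each robot
    also tracks conveyors outside its own field of view (step S301)."""
    merged = dict(local)
    for conveyor_id, (ts, state) in received.items():
        if conveyor_id not in merged or ts > merged[conveyor_id][0]:
            merged[conveyor_id] = (ts, state)
    return merged

# e.g. merge_shared_states({"A": (10.0, "to_be_sent")},
#                          {"B": (11.5, "to_be_received")})
# -> {"A": (10.0, "to_be_sent"), "B": (11.5, "to_be_received")}
```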
Based on the foregoing embodiment, step S400 of determining the target conveying device 2 to be docked with the robot and the robot's transport task type according to the body state information and the working state information of the conveying devices specifically comprises the steps of:
S410, judging, according to the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
S420, when the robot is in the to-be-loaded state, determining the conveying device 2 whose working state information indicates a to-be-sent state as the target conveying device 2, and determining the transport task type as a cargo receiving type;
S430, when the robot is in the to-be-unloaded state, determining the conveying device 2 whose working state information indicates a to-be-received state as the target conveying device 2, and determining the transport task type as a cargo delivery type.
Specifically, the transport robot 1 analyzes its own body state information and judges whether it is in the to-be-loaded state at the current moment. If it is, the judging stops; otherwise, it judges whether it is in the to-be-unloaded state at the current moment. Of course, the order may be reversed: first judge whether the robot is in the to-be-unloaded state at the current moment; if so, stop judging, otherwise judge whether it is in the to-be-loaded state.
Once the current transport robot 1 judges that it is in the to-be-loaded state, it checks the working state information of all the conveying devices 2, judging whether each conveying device 2 is in the to-be-sent state at the current moment. If the current conveying device 2 is in the to-be-sent state, the judging stops, the current conveying device 2 is determined as the target conveying device 2, and the transport task type of the transport robot 1 is determined as the cargo receiving type; otherwise the robot switches to the next conveying device 2 and continues, until it determines one conveying device 2 as the target conveying device 2.
Once the current transport robot 1 judges that it is in the to-be-unloaded state, it checks the working state information of all the conveying devices 2, judging whether each conveying device 2 is in the to-be-received state at the current moment. If the current conveying device 2 is in the to-be-received state, the judging stops, the current conveying device 2 is determined as the target conveying device 2, and the transport task type of the transport robot 1 is determined as the cargo delivery type; otherwise the robot switches to the next conveying device 2 and continues, until it determines one conveying device 2 as the target conveying device 2.
Illustratively, as shown in fig. 2, multiple indicator lights 21 may be arranged horizontally or vertically, and the target image is divided into lit regions corresponding to the indicator lights 21 of different colors (or different shapes) according to the arrangement direction, to facilitate color recognition (or shape recognition) of the indicator lights 21. Suppose two indicator lights 21 are arranged horizontally: a red indicator light 21 (or a circular indicator light 21) and a green indicator light 21 (or a square indicator light 21). When the red (or circular) indicator light 21 is lit and the green (or square) indicator light 21 is off, the conveying device 2 is in the to-be-sent state; otherwise, the conveying device 2 is in the to-be-received state. The transport robot 1 captures a target image and performs color recognition (or shape recognition) and on-off state recognition of the indicator lights 21 on it. If the red (or circular) indicator light 21 is lit, the light is recognized as a red (or circular) light, so the transport robot 1 determines the conveying device 2 whose red (or circular) light is lit as the target conveying device 2 to dock with, and determines its transport task type as the cargo receiving type. If the green (or square) indicator light 21 is lit, the light is recognized as a green (or square) light, so the transport robot 1 determines the conveying device 2 whose green (or square) light is lit as the target conveying device 2 to dock with, and determines its transport task type as the cargo delivery type.
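The worked example above reduces to a small decision table; the sketch below encodes it, with the two-light red/green convention taken from the example and everything else an assumption.

```python
def conveyor_state_from_lights(red_on: bool, green_on: bool) -> str:
    """Two-light convention from the example: red lit = to-be-sent,
    green lit = to-be-received."""
    if red_on and not green_on:
        return "to_be_sent"
    if green_on and not red_on:
        return "to_be_received"
    return "unknown"   # ambiguous reading; re-capture the target image

def task_type(body_state: str, conveyor_state: str):
    """Steps S410-S430: match the robot's state against a conveyor's state."""
    if body_state == "to_be_loaded" and conveyor_state == "to_be_sent":
        return "receive_cargo"
    if body_state == "to_be_unloaded" and conveyor_state == "to_be_received":
        return "deliver_cargo"
    return None        # no match; try the next conveying device
```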
In this embodiment, the transport robot 1 can independently formulate and generate tasks according to its own body state information and the working state information of the conveying devices 2, and the transport robot 1 and the conveying device 2 dock with each other to complete the cargo dispatching and conveying work. Transport dispatching is completed without any participation of a dispatching party, which reduces the delay of transport tasks caused by dispatching-party involvement, improves docking efficiency and cargo transport efficiency, and broadens practical adoption.
Based on the foregoing embodiment, the method further comprises the steps of:
S401, when at least two candidate conveying devices 2 matching the robot's own working state are obtained by analysis from the body state information and the working state information, calculating the distance value between the robot and each candidate conveying device 2;
S402, comparing all the distance values, and determining the candidate conveying device 2 with the smallest distance value as the target conveying device 2.
Specifically, if the application scene contains multiple conveying devices 2, the transport robot 1 may find, from the body state information and the working state information of the conveying devices 2, at least two candidate conveying devices 2 matching its own to-be-loaded or to-be-unloaded state; that is, when the transport robot 1 is in the to-be-loaded state there may be at least two conveying devices 2 in the to-be-sent state, and when it is in the to-be-unloaded state there may be at least two conveying devices 2 in the to-be-received state. In that case, the transport robot 1 may communicate with all candidate conveying devices 2 matching its own working state and measure the signal strength of each link, calculating the distance value to each candidate conveying device 2 from the signal strength. Alternatively, it may emit a detection signal (for example, laser or infrared) toward each candidate conveying device 2 and, after receiving the reflected signal, calculate the distance value from the emission time of the detection signal, the reception time of the reflected signal, and the propagation speed of the signal. In any case, any method of calculating the distance value between the transport robot 1 and each candidate conveying device 2 falls within the scope of the present invention. After calculating the distance values, the transport robot compares them all and takes the candidate conveying device 2 with the smallest distance value as the target conveying device 2.
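For the time-of-flight variant described above, the arithmetic is a one-liner; the sketch below shows it together with the nearest-candidate selection of step S402. The propagation speed defaults to the speed of light, an assumption appropriate for a laser or infrared probe.

```python
def distance_from_round_trip(t_emit, t_receive, speed=3.0e8):
    """Half of (round-trip time x propagation speed); times in seconds,
    speed in m/s (default: speed of light, for a laser/infrared probe)."""
    return (t_receive - t_emit) * speed / 2.0

def nearest_candidate(distances):
    """distances: {conveyor_id: distance in metres}; smallest wins (S402)."""
    return min(distances, key=distances.get)

# e.g. nearest_candidate({"A": 4.2, "B": 2.7, "C": 6.1}) -> "B"
```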
In this embodiment, when the transport robot 1 finds at least two candidate conveying devices 2, a stall in which it cannot choose the target conveying device 2 for the cargo handling operation is avoided: it can autonomously select the candidate conveying device 2 with the smallest distance value as the target conveying device 2, ensuring that each transport robot 1 covers the shortest distance over the whole cargo transport process and thereby improving cargo transport efficiency.
Based on the foregoing embodiment, step S500 of acquiring the spatial position of the target conveying device 2 and navigating to its position specifically comprises the steps of:
S510, detecting and recognizing, through a visual detection algorithm, at least four target semantic points 22 of the target conveying device 2 in the target image; the at least four target semantic points 22 are maximum-outline vertices of the target conveying device 2 and are not coplanar;
S520, calculating a first spatial position of the target conveying device 2 according to the size information of the target conveying device 2;
S530, navigating to the position of the target conveying device 2 according to the first spatial position.
Specifically, as shown in fig. 2, after the transport robot 1 has determined the target conveying device 2, image recognition is performed on the corresponding target image. Since, in an unconstrained scene, the target conveying device 2 may appear at different angles in the target images captured by the transport robot 1, at least four target semantic points 22 of the target conveying device 2 are recognized by regression after the device itself has been recognized by the visual detection algorithm. A semantic point is a describable, fixed, highly recognizable point on the conveying device 2 that can specifically describe the device's position in the application scene; a target semantic point is such a point on the target conveying device 2. Owing to the invariance of the conveying device 2, the spatial position of a semantic point relative to the application scene is determined, and because semantic points are highly recognizable, they are easier to regress and recognize in subsequent images than other points on the conveying device 2. The more semantic points there are, and the more accurate they are (i.e., the closer they lie to the maximum-outline vertices of the conveying device 2), the more accurately the transport robot 1 can calculate the first spatial position of the target conveying device 2.
After the transport robot 1 captures a target image, it locates and recognizes the target conveying device 2 in the image through an existing visual detection algorithm and then locates and recognizes at least four target semantic points 22 of the target conveying device 2; the regression of the target semantic points 22 is analogous to facial landmark regression on a human face. For example, a second preset neural network model may be based on Faster R-CNN with a MobileNetV2 backbone network, which recognizes and locates the target conveying device 2 in the target image before the at least four target semantic points 22 are located; alternatively, the second preset neural network model may be based on R-CNN, SPP-Net, Fast R-CNN, YOLO, or SSD, with a MobileNetV2 or MobileNetV1 backbone network. Since the dimensions of the target conveying device 2 are known, the spatial coordinates of the at least four target semantic points 22 relative to the image acquisition module 11, and hence of the target conveying device 2 relative to the image acquisition module 11, can be calculated directly with the EPnP algorithm using the camera calibration result. Because the image acquisition module 11 is mounted at a fixed position on the transport robot 1, and the pixel coordinates of the target conveying device 2 imaged in the image acquisition module 11 are known, the spatial coordinates of the target conveying device 2 in the world coordinate system are calculated from the conversion relationship between the world coordinate system and the camera coordinate system; and since the origin of the world coordinate system is known, the first spatial position of the target conveying device 2 in the application scene is obtained.
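A minimal sketch of the EPnP step using OpenCV's solver; the point layout, intrinsics, and variable names are assumptions. The semantic points' coordinates in the conveyor's own frame are known because the conveyor's dimensions are known.

```python
import cv2
import numpy as np

def first_spatial_position(object_points, image_points, camera_matrix,
                           dist_coeffs=None):
    """EPnP pose of the target conveying device relative to the camera.
    object_points: Nx3 semantic-point coordinates in the conveyor frame
    (N >= 4, known from the conveyor's dimensions); image_points: Nx2
    pixel coordinates regressed from the target image; camera_matrix:
    3x3 intrinsics from camera calibration."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    return rvec, tvec  # rotation (Rodrigues vector) and translation
```

Chaining this camera-frame pose with the known mounting pose of the image acquisition module 11 on the robot, and the robot's own pose in the world coordinate system, then yields the first spatial position described above.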
Through this embodiment, the constructed preset neural network model consists of a base structure and a backbone network; the target conveying device 2 in the target image is recognized by cascade regression, after which the at least four target semantic points 22 are located. This stepwise regression, first a coarse regression to recognize and locate the target conveying device 2 and then a fine regression to recognize the target semantic points 22, forms a coarse-to-fine cascade that effectively avoids overfitting and greatly improves the speed and quality of recognition and localization.
The process of constructing and training the first and second preset neural network models is prior art. Illustratively, the target conveying device 2 is framed in advance and four target semantic points 22 are defined; training sample images with the pre-annotated target conveying device 2 and the four target semantic points 22 are obtained; the second preset neural network model is constructed by training on these sample images; and recognition is then performed with the trained model. The first preset neural network model is constructed and trained in the same manner and is not described again here.
Based on the foregoing embodiment, step S500 of acquiring the spatial position of the target conveying device 2 and navigating to its position specifically comprises the steps of:
S540, emitting detection laser toward the legs 23 of the target conveying device 2, and acquiring the laser coordinates of each leg 23 in the laser coordinate system;
S550, calculating a second spatial position of the target conveying device 2 according to the laser coordinates;
S560, navigating to the position of the target conveying device 2 according to the second spatial position.
Specifically, detection laser is emitted toward the legs 23 of the target conveying device 2 shown in fig. 2 by a laser transceiver provided on the transport robot 1. As shown in fig. 3, Ow-XwYwZw is defined as the world coordinate system and Oc-XcYcZc as the laser coordinate system: the laser emission direction is the x-axis, the laser scanning direction is the y-axis, the two axes form the scanning plane, and the z-axis is perpendicular to the scanning plane. Let point P be the center point of any leg 23 of the target conveying device 2; the transport robot 1 can then calculate the spatial coordinates of that leg center point in the world coordinate system. The relationship between the laser coordinate system and the world coordinate system is:

$$\begin{bmatrix} m \\ n \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

wherein the 3×4 conversion matrix [R T] is composed of the laser rotation matrix R and the laser translation matrix T; [m, n, 1] is the homogeneous form of the laser coordinates of the laser point P' corresponding to point P in the laser coordinate system, and [Xw, Yw, Zw, 1] is the homogeneous form of the world coordinates of point P in the world coordinate system. The laser rotation matrix and the laser translation matrix can be solved from several groups of corresponding laser and world coordinates; as this is prior art, it is not described in detail here. The world coordinate system is constructed only to describe the spatial positions of the laser transceiver and the target conveying device 2 more conveniently: the laser transceiver is fixedly mounted on the transport robot 1, whose spatial position in the application scene is known, and the laser coordinates of the center point 231 of each leg 23 of the target conveying device 2 in the laser coordinate system are known, so the spatial coordinates of each leg center point 231 in the world coordinate system are calculated from the conversion relationship between the two coordinate systems. Since the origin of the world coordinate system is known, the second spatial position of the center points 231 of the legs 23 of the target conveying device 2 in the application scene is obtained.
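A small sketch of the coordinate conversion described above; R and T are the calibrated laser rotation and translation, and the conversion direction used here (laser frame to world frame) plus the variable names are assumptions.

```python
import numpy as np

def leg_world_coordinates(m, n, R, T):
    """Map a leg center point from the laser coordinate system to the world
    coordinate system. (m, n) are the point's coordinates in the scanning
    plane (z = 0 in the laser frame); R (3x3) and T (3,) are assumed here
    to be the calibrated pose of the laser frame expressed in the world
    frame, i.e. the inverse of the world-to-laser conversion above."""
    p_laser = np.array([m, n, 0.0])   # scan-plane point in the laser frame
    return R @ p_laser + np.asarray(T)

# e.g. with R = np.eye(3) and T = the transceiver's world position, a scan
# point (1.2, 0.4) maps to T + (1.2, 0.4, 0.0) in the world frame.
```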
Through this embodiment, laser positioning enables accurate positioning in any environment and any place, with strong adaptability and high accuracy.
In an embodiment of the present invention, as shown in fig. 4, a dispatching control system of a transport robot 1 comprises a plurality of transport robots 1 and conveying devices 2; each conveying device 2 is provided with an indicator light 21 for indicating its working state; and each transport robot 1 comprises an image acquisition module 11, a detection module 12, a processing module 13, an analysis module 14, a control module 15, and an execution module 16;
the detection module 12 is used for detecting the robot's own working state to obtain body state information;
the image acquisition module 11 is used for capturing a target image containing a conveying device 2 and the indicator light 21 arranged at the conveying device 2;
the processing module 13 is connected with the image acquisition module 11 and is used for analyzing the target image to obtain the working state information of the conveying device 2;
the analysis module 14 is connected with the processing module 13 and the detection module 12, respectively, and is used for determining, according to the body state information and the working state information of the conveying devices, the target conveying device 2 to be docked with the robot and the robot's transport task type;
the control module 15 is connected with the analysis module 14 and is used for acquiring the spatial position of the target conveying device 2 and navigating to the position of the target conveying device 2;
and the execution module 16 is connected with the analysis module 14 and is used for docking with the target conveying device 2 according to the transport task type after the robot has moved to the position of the target conveying device 2, to complete loading or unloading of the cargo.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
Based on the foregoing embodiment, the processing module 13 comprises a first image recognition unit and a first processing unit;
the first image recognition unit is used for performing indicator light state recognition on the target image with a preset neural network model through a visual detection algorithm;
the first processing unit is connected with the first image recognition unit and is used for obtaining the working state information of the conveying device 2 from the indicator light state recognition result;
the indicator light state comprises an on-off state and a shape-and-color state; the shape-and-color state comprises a color state and a shape state.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
Based on the foregoing embodiment, each transport robot 1 further comprises a wireless communication module;
the wireless communication module is connected with the wireless communication modules of the other transport robots 1 and is used for transmitting and sharing the working state information of the conveying devices 2 obtained by each robot's own analysis.
Based on the foregoing embodiment, the analysis module 14 comprises a judging unit and a first determining unit;
the judging unit is used for judging, according to the body state information, whether the robot is in a to-be-loaded state or a to-be-unloaded state;
the first determining unit is connected with the judging unit and is used for determining, when the robot is in the to-be-loaded state, the conveying device 2 whose working state information indicates a to-be-sent state as the target conveying device 2 and determining the transport task type as a cargo receiving type; and, when the robot is in the to-be-unloaded state, determining the conveying device 2 whose working state information indicates a to-be-received state as the target conveying device 2 and determining the transport task type as a cargo delivery type.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
Based on the foregoing embodiment, the analysis module 14 further comprises a second processing unit and a second determining unit;
the second processing unit is used for calculating, when at least two candidate conveying devices 2 matching the robot's working state are obtained by analysis from the body state information and the working state information, the distance value between the robot and each candidate conveying device 2, and for comparing all the distance values;
and the second determining unit is connected with the second processing unit and is used for determining the candidate conveying device 2 with the smallest distance value as the target conveying device 2.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
Based on the foregoing embodiment, the control module 15 comprises a second image recognition unit, a third processing unit, and a first navigation moving unit;
the second image recognition unit is used for detecting and recognizing, through a visual detection algorithm, at least four target semantic points 22 of the target conveying device 2 in the target image, wherein a target semantic point 22 is a fixed, highly recognizable point on the target conveying device 2;
the third processing unit is connected with the second image recognition unit and is used for calculating a first spatial position of the target conveying device 2 according to the size information of the target conveying device 2;
and the first navigation moving unit is connected with the third processing unit and is used for navigating to the position of the target conveying device 2 according to the first spatial position.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
Based on the foregoing embodiment, the control module 15 comprises a laser detection unit, a fourth processing unit, and a second navigation moving unit;
the laser detection unit is used for emitting detection laser toward the legs 23 of the target conveying device 2 and acquiring the laser coordinates of each leg 23 in the laser coordinate system;
the fourth processing unit is connected with the laser detection unit and is used for calculating a second spatial position of the target conveying device 2 according to the laser coordinates;
and the second navigation moving unit is connected with the fourth processing unit and is used for navigating to the position of the target conveying device 2 according to the second spatial position.
Specifically, this embodiment is a device embodiment corresponding to the method embodiment above; for its specific effects, refer to the method embodiment, which are not repeated here.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also fall within the protection scope of the invention.