CN114700964B - Intelligent auxiliary robot for container

Intelligent auxiliary robot for container

Info

Publication number
CN114700964B
Authority
CN
China
Prior art keywords
container
algorithm
unlocking
target
assembly
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210301724.4A
Other languages
Chinese (zh)
Other versions
CN114700964A (en)
Inventor
叶茂
张永顺
韦彦光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guiyang Zhenxin Technology Co ltd
Original Assignee
Guiyang Zhenxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guiyang Zhenxin Technology Co ltd
Priority to CN202210301724.4A
Publication of CN114700964A
Application granted
Publication of CN114700964B
Active legal status (current)
Anticipated expiration

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the field of robot equipment, in particular to an intelligent auxiliary robot for containers. The robot comprises a robot body on which a setting assembly, a camera, a path planning assembly, a master controller, a control assembly and a plurality of walking feet are arranged, with electromagnet assemblies mounted on the walking feet. The camera shoots a target image and sends it to the master controller. The path planning assembly stores a target recognition algorithm and a plurality of path planning algorithms. The master controller calls the target recognition algorithm to recognize target information in the target image, calls a path planning algorithm to plan a path according to the layout information from the setting assembly to obtain planned path information, generates gait planning information for the walking feet from the planned path information, and sends control signals to the electromagnet assemblies. The electromagnet assemblies drive the walking feet to move according to the control signals. Because the algorithms can be updated and upgraded centrally on the path planning assembly, the whole robot does not need to be carried away for upgrading, making maintenance more convenient.

Description

Intelligent auxiliary robot for container
Technical Field
The application relates to the field of robot equipment, in particular to an intelligent auxiliary robot for a container.
Background
The diversification and growth of global trade have made shipping increasingly common. To keep goods intact during shipping, they are first stored in containers, which are then gathered at ports and wharfs for inspection and stacking before being loaded and transported in bulk. As global trade grows, the number and size of containers at ports and wharfs keep increasing, so the demand for intelligent port operations, such as using robots for inspection and stacking, keeps rising in order to improve the efficiency of container inspection and stacking.
Twist locks on containers are relatively fine objects to operate on and require a certain amount of force, so twist-lock operations are usually performed manually. However, in emergencies such as outbreaks of infectious diseases, where pathogens can spread via handled objects, manual twist-lock operation carries a high risk of transmission, which motivates replacing manual work with a functional robot. Because containers are generally made of metal, with other metal parts forming a complex surface, both the stability of a robot walking on the container surface and the software development of the other operation control functions are challenging. To realize unmanned unlocking of container twist locks, a robot dedicated to twist-lock operation is urgently needed.
Disclosure of Invention
The application aims to provide an intelligent auxiliary robot for containers, so as to improve the efficiency of twist-lock operation, reduce cost, and ensure the safety of the personnel who would otherwise unlock container twist locks.
In order to achieve the above purpose, the application adopts the following technical scheme:
the robot comprises a robot body, wherein a setting assembly, a camera, a path planning assembly, a master controller, a control assembly and a plurality of walking feet are arranged on the robot body, and electromagnet assemblies are arranged on the walking feet;
the setting component is used for inputting the layout information of the container and sending the layout information to the main controller;
the camera shoots a target image in the moving process of the robot body and sends the target image to the main controller;
a path planning component storing a target recognition algorithm and a plurality of path planning algorithms;
the main controller is used for acquiring the layout information of the container from the setting assembly and the target image from the camera; according to the acquired layout information and target image, it calls the target recognition algorithm from the path planning assembly to recognize the target information in the target image; according to the target information, the main controller calls any one of the path planning algorithms in the path planning assembly and inputs the target information into it for path planning to obtain planned path information, generates gait planning information for the walking feet according to the planned path information, and sends a control signal to the control assembly according to the gait planning information;
the control assembly is used for controlling the electromagnet assembly to work according to the control signal;
and the electromagnet assembly is adsorbed onto the container or released under the control of the control assembly.
The beneficial effect of this scheme is:
When the robot body walks on the container, target information is identified from the acquired target image. For each job, the robot can acquire the layout information of the container, select a path planning algorithm accordingly, combine the selected algorithm with the target information to plan a path, and finally generate gait planning information from the planned path, controlling the walking feet to move accordingly. The robot automatically plans its path and performs the twist-lock operation, which improves twist-lock operation efficiency, reduces cost, and ensures the personal safety of twist-lock operators.
A suitable path planning algorithm can be selected from the plurality of path planning algorithms as required, so there is no need to provide a separate robot for each algorithm, which saves cost. Meanwhile, because the path planning algorithms are integrated on the path planning assembly, the algorithms can be updated and upgraded centrally: the whole robot does not need to be carried away for upgrading, and an updated path planning assembly can be swapped directly onto the robot without delaying its use, making maintenance more convenient.
Further, an unlocking mechanical arm is arranged on the robot body, and an unlocking device is arranged on the unlocking mechanical arm. The unlocking device comprises an unlocking U-shaped claw, a first strain sensor and a distance sensor. The distance sensor is used for detecting the width of the container connection gap, and the first strain sensor detects the reverse thrust when the unlocking U-shaped claw unlocks the twist lock. When the robot body walks to the target, the master controller controls the unlocking mechanical arm so that the distance sensor detects the container connection gap between container layers; after the gap is detected, the master controller controls the unlocking U-shaped claw to be pushed to the twist-lock target position, and successful unlocking is confirmed by the unlocking reverse thrust detected by the first strain sensor.
The beneficial effects are that: through the distance sensor of the unlocking device, the container connection gap between container layers can be located precisely, ensuring the stability of the robot body during operation.
Further, the electromagnet assembly comprises an electromagnet body, a magnet controller, a magnet driver and a second strain sensor; the second strain sensor is arranged on the end face of the electromagnet body facing the container, the electromagnet body is in signal connection with the magnet driver, the magnet driver is in signal connection with the magnet controller, and the magnet controller is in signal connection with the control assembly.
The beneficial effects are that: through the arrangement of the parts of the electromagnet assembly, the adsorption condition of the walking feet on the metal wall of the container can be detected accurately, ensuring the stability of the robot body moving on the metal wall.
Further, a plurality of foot steering engines are arranged on each walking foot, a rotary steering engine is arranged on the unlocking mechanical arm and connected with a rotary steering engine driver, and the unlocking mechanical arm is provided with an arm steering engine connected with an arm steering engine driver. The control assembly acquires the control signal and, according to it, controls the foot steering engines and the electromagnet bodies to move cooperatively following the gait planning information.
The beneficial effects are that: the plurality of foot steering engines on each walking foot guarantee the walking foot's multiple degrees of freedom, and the arm steering engine on the unlocking mechanical arm keeps the arm's operating degrees of freedom, preventing confusion during operation.
Further, the path planning algorithms comprise the Dijkstra algorithm, the best-first search algorithm and the A* algorithm.
The beneficial effects are that: by providing multiple path planning algorithms, the appropriate algorithm can be selected directly according to the environment and requirements in actual use, making the robot better suited to real-world applications.
Further, the target recognition algorithm is a YOLO target recognition algorithm based on deep learning.
The beneficial effects are that: targets can be accurately identified through the deep-learning-based YOLO target recognition algorithm.
Further, the YOLO target recognition algorithm uses a YOLO3 network and detects targets on three feature maps of different scales, with shortcut links set between preset layers of the YOLO3 network.
The beneficial effects are that: target detection on feature maps of different scales improves detection accuracy given the limited computing power of the robot body as an edge device, and the shortcut links improve the correction effect of back-propagation on deeper layers of the model.
Further, the unlocking mechanical arm comprises a shell, and the rotary steering engine is located between the shell and the unlocking device. An electric push rod is arranged in the shell, and the unlocking U-shaped claw is located on the end of the electric push rod that extends out of the shell. An L-shaped rotary groove is formed in the shell and extends across two mutually perpendicular side walls of the shell. A limiting sleeve with a limiting groove is fixed on the electric push rod, and a rotary buckle that extends out through the rotary groove is clamped in the limiting groove. The distance sensor is located on the shell. An adjusting motor for adjusting the position of the rotary buckle is fixed in the shell and signal-connected to the master controller, with its output shaft connected to the rotary buckle.
The beneficial effects are that: by arranging the rotary buckle on the shell at the end of the unlocking mechanical arm, when the twist lock on the container is operated, the adjusting motor drives the limiting sleeve and rotary buckle to adjust the buckle's position so that it clamps into the container connection gap, supporting the unlocking mechanical arm as a lever and fixing the buckle in place. The first strain sensor on the rotary buckle accurately collects the reverse thrust of the push rod, assisting the unlocking. The rotary buckle thus reduces the weight load on the robot body when the unlocking mechanical arm operates the twist lock, maintaining the stability of the robot body on the container surface.
Drawings
Fig. 1 is a front view of an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 2 is a top view of the shell of the unlocking device of fig. 1.
Fig. 3 is an exterior view of the shell of the unlocking device of fig. 1.
Fig. 4 is a schematic block diagram of an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 5 is a schematic diagram of the result of the path planning algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 6 is a schematic diagram of the result of the Dijkstra algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 7 is a schematic diagram of the result of the BFS algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 8 is the Darknet-53 network structure of the target recognition algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 9 is a diagram of the residual component structure in the target recognition algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 10 shows the detection results of YOLO3 at different scales in the target recognition algorithm in an embodiment of the container intelligent auxiliary robot of the present application.
Fig. 11 is a comparison of input and output results at multiple scales in an embodiment of the container intelligent auxiliary robot of the present application.
Detailed Description
Further details are provided below with reference to the specific embodiments.
Reference numerals in the drawings of the specification include: the unlocking U-shaped claw 1, a first strain sensor 2, a distance sensor 3, a rotary buckle 4, an electric push rod 5, a magnet universal joint 7, an electromagnet body 8, a walking foot 9, a shell 10, an unlocking mechanical arm 11, a rotary groove 12, a limiting sleeve 13 and an adjusting motor 14.
Examples
The container intelligent auxiliary robot shown in figs. 1, 2, 3 and 4 comprises a robot body on which a setting assembly, a camera, a path planning assembly, a master controller, a control assembly and a plurality of walking feet 9 are arranged. Six walking feet 9 are provided, each with three degrees of freedom of movement; a plurality of foot steering engines are installed on each walking foot 9, in one-to-one correspondence with the degrees of freedom.
Each walking foot 9 is provided with an electromagnet assembly, which acquires control signals and is adsorbed onto the container or released under the control of the control assembly. The electromagnet assembly comprises an electromagnet body 8, a magnet controller, a magnet driver and a second strain sensor. The second strain sensor is arranged on the end face of the electromagnet body 8 facing the container; the electromagnet body 8 is mounted on the end of the walking foot 9 through a magnet universal joint 7 and is signal-connected to the magnet driver; the magnet driver is signal-connected to the magnet controller, and the magnet controller is signal-connected to the control assembly. The control assembly acquires the control signals and, according to them, controls the foot steering engines and the electromagnet bodies 8 to move cooperatively following the gait planning information.
The setting assembly is used for inputting the layout information of the container and sending it to the master controller; it can send the layout information through buttons or keys. The master controller can use an existing control module of the Jetson TX2 model.
The path planning assembly stores a target recognition algorithm and a plurality of path planning algorithms; the path planning algorithms comprise the Dijkstra algorithm, the best-first search algorithm and the A* algorithm, and the target recognition algorithm is a deep-learning-based YOLO target recognition algorithm.
As shown in figs. 2 and 3, an unlocking mechanical arm 11 is arranged on the robot body; the structure providing each degree of freedom of movement of the unlocking mechanical arm 11 is conventional and is not described here. The camera shoots target images while the robot body moves and sends them to the master controller; the camera is installed on the unlocking mechanical arm 11 with screws and gaskets.
An unlocking device is installed on the unlocking mechanical arm 11, which includes a shell 10. The unlocking device comprises an unlocking U-shaped claw 1, a first strain sensor 2, a clamping force sensor and a distance sensor 3. An electric push rod 5 is arranged in the shell 10, and the unlocking U-shaped claw 1 is fixedly connected to the end of the electric push rod 5 that extends out of the shell 10; the electric push rod 5 is fixed on the inner wall of the shell 10 by gaskets and screws. A rotary steering engine is installed between the shell 10 and the unlocking mechanical arm 11 and drives the shell 10 and the parts inside it to rotate, so that the unlocking U-shaped claw 1 is first rotated to a suitable angle and then extended by the push rod.
The clamping force sensor is arranged on the inner wall of the unlocking U-shaped claw 1 and detects the operating pressure when the unlocking U-shaped claw 1 performs the twist-lock operation. The master controller obtains the operating pressure from the clamping force sensor and accordingly controls the electric push rod 5 to drive the unlocking U-shaped claw 1 to perform the twist-lock operation.
An L-shaped rotary groove 12 is formed in the shell 10 and extends across two mutually perpendicular side walls of the shell 10. An adjusting motor 14 is fixed in the shell 10; the adjusting motor 14 can be an existing rotary motor product and is signal-connected to the master controller. A limiting sleeve 13 is installed in the shell 10, and the output shaft of the adjusting motor 14 is fixed to the rotary buckle 4 by a mounting disc and screws. A limiting groove is formed in the limiting sleeve 13, which is welded onto the electric push rod 5 so as to sense the unlocking reverse thrust when the electric push rod 5 pushes the unlocking U-shaped claw 1. The rotary buckle 4, which extends out through the rotary groove 12, is clamped in the limiting groove of the limiting sleeve 13. The first strain sensor 2 is fixedly mounted on the rotary buckle 4; it detects the unlocking reverse thrust when the electric push rod 5 pushes, i.e. the reaction force applied to the rotary buckle 4, and sends it to the master controller.
The distance sensor 3 is fixedly mounted on the shell 10, below the rotary groove 12; an existing ultrasonic ranging sensor can be used. The distance sensor 3 detects the width of the container connection gap, which serves to hold the rotary buckle 4 as a support point for the push rod; the width is the distance across the gap between containers, and checking it prevents an oversized gap from compromising the fixing of the robot body's position. The distance sensor 3 sends the measured width to the master controller, which corrects it with an existing compensation algorithm and judges whether it lies within a preset range; the preset range is set so that the clearance does not let the robot body fall during the twist-lock operation. When the width lies within the preset range, the master controller starts the adjusting motor 14, which drives the rotary buckle 4 to rotate into the container connection gap; the master controller then controls the unlocking mechanical arm 11 to work, judging from the reaction force detected by the first strain sensor 2 whether the unlocking U-shaped claw 1 has engaged the twist lock, and withdrawing the unlocking mechanical arm 11 afterwards.
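To make the gap check concrete, here is a minimal Python sketch of the logic in the preceding paragraph; the compensation constants and the preset range are illustrative assumptions, not values from the patent.

```python
# Hypothetical gap-checking logic: read the ultrasonic distance, apply a
# simple linear compensation, and decide whether the adjusting motor may
# rotate the buckle into the container connection gap.
GAP_MIN_MM, GAP_MAX_MM = 8.0, 25.0        # hypothetical preset range

def compensate(raw_mm, offset_mm=1.2, scale=1.01):
    """Hypothetical linear compensation for the ranging error."""
    return raw_mm * scale - offset_mm

def check_connection_gap(raw_mm):
    gap = compensate(raw_mm)
    if GAP_MIN_MM <= gap <= GAP_MAX_MM:
        return "rotate buckle into gap"    # start adjusting motor 14
    return "reposition robot"              # gap unusable as a fulcrum

print(check_connection_gap(15.0))
```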
An arm steering engine is installed on the unlocking mechanical arm 11, and a steering engine driver is installed on the robot body; the foot steering engines and the arm steering engine are signal-connected to the steering engine driver, which is connected to the master controller. The unlocking mechanical arm 11 is also provided with a buckle steering engine, signal-connected to the steering engine driver.
The Dijkstra algorithm is an existing algorithm, whose planning principle is shown in figs. 5 and 6: starting from the initial point where the object is located, the algorithm visits the nodes of the graph. It repeatedly examines the closest not-yet-examined node and adds its neighbors to the set of nodes to be examined; the set of nodes thus extends outward from the initial node until the target node is reached. The Dijkstra algorithm is guaranteed to find a shortest path from the initial point to the target point as long as all edges have non-negative cost. In fig. 6, the node above the dotted line is the initial node, the node below the dotted line is the target point, and the diamond-like area is the area scanned by the Dijkstra algorithm; the lightest-colored areas are those farthest from the initial point and thus form the frontier of the search.
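As a concrete illustration of the search just described, the following Python sketch runs Dijkstra's algorithm on a small 4-connected grid; the grid contents, costs and coordinates are illustrative assumptions, not part of the patent.

```python
# A minimal Dijkstra sketch: the container surface is assumed to have been
# discretized into free (0) and blocked (1) cells with uniform step cost.
import heapq

def dijkstra(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]                      # frontier ordered by cost from start
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:                   # goal reached: rebuild the path
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue                       # stale queue entry, skip it
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                nd = d + 1                 # uniform, non-negative edge cost
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    prev[nb] = node
                    heapq.heappush(pq, (nd, nb))
    return None                            # goal unreachable

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 3)))      # e.g. [(0,0), (0,1), (0,2), (1,2), (2,2), (2,3)]
```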
The best-first search (BFS) algorithm is an existing algorithm whose planning principle is shown in fig. 7. It operates with a similar flow, except that it estimates the cost from any node to the target point (a heuristic): instead of selecting the node closest to the initial node, it selects the node estimated to be closest to the target. BFS cannot guarantee finding a shortest path, but it is much faster than the Dijkstra algorithm because the heuristic function steers it quickly toward the target node. For example, if the target lies south of the departure point, BFS tends to explore paths leading south. In fig. 7, nodes far from the target have higher heuristic values (high estimated cost of moving to the target), while nodes near the target have lower heuristic values (low estimated cost); this is why BFS runs faster than the Dijkstra algorithm.
The A* algorithm is an existing algorithm, and its planning principle is shown in fig. 5: the algorithm computes the priority of each node with a preset function model f(n) = g(n) + h(n), where f(n) is the comprehensive priority of node n (when selecting the next node to traverse, the node with the highest comprehensive priority, i.e. the smallest f(n), is always chosen); g(n) is the cost of node n from the start point; and h(n) is the estimated cost of node n to the end point, which is the heuristic function of the A* algorithm. During operation, the algorithm repeatedly selects the node with the smallest f(n) value (highest priority) from the priority queue as the next node to traverse.
Regarding the heuristic function: in the extreme case where h(n) is always 0, the priority is determined by g(n) alone and the algorithm degrades into the Dijkstra algorithm. If h(n) is always less than or equal to the true cost from node n to the end point, the A* algorithm is guaranteed to find the shortest path; but the smaller h(n) is, the more nodes the algorithm traverses and the slower it runs. If h(n) is exactly equal to the true cost from node n to the end point, A* finds the best path quickly, but this is rarely achievable, because the exact remaining distance is hard to compute before the end point is reached. If h(n) exceeds the true cost from node n to the end point, A* can no longer guarantee the shortest path, but it runs faster. In the other extreme, if h(n) is much larger than g(n), only h(n) takes effect and the algorithm becomes best-first search.
Thus, the speed and accuracy of the algorithm can be traded off by adjusting the heuristic: in some cases the shortest path is not necessary and finding some path as quickly as possible is preferable. This is where the A* algorithm is flexible.
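The following Python sketch illustrates the f(n) = g(n) + h(n) trade-off discussed above, using the same grid representation as the Dijkstra sketch. The heuristic weight w is an illustrative device, not part of the patent: w = 0 reduces the search to Dijkstra's algorithm, w = 1 is standard A*, and a large w behaves like greedy best-first search, expanding far fewer nodes.

```python
# A* with a tunable heuristic weight w, showing the spectrum from Dijkstra
# (w = 0) through standard A* (w = 1) to greedy best-first-like search (large w).
import heapq

def a_star(grid, start, goal, w=1.0):
    rows, cols = len(grid), len(grid[0])
    def h(n):                              # Manhattan-distance heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    g = {start: 0}
    pq = [(w * h(start), start)]
    expanded = 0
    while pq:
        _, node = heapq.heappop(pq)
        expanded += 1
        if node == goal:
            return g[node], expanded       # (path cost, nodes expanded)
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g[node] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    heapq.heappush(pq, (ng + w * h(nb), nb))  # f = g + w*h
    return None

grid = [[0] * 20 for _ in range(20)]       # empty 20x20 grid for illustration
for w in (0.0, 1.0, 5.0):                  # Dijkstra-like, A*, greedy-like
    print(w, a_star(grid, (0, 0), (19, 19), w))
```

On this empty grid all three settings find the same cost-38 path, but the number of expanded nodes drops sharply as w grows, which is exactly the speed-versus-accuracy trade-off the text describes.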
The operation target of the twist-lock operation on the container is recognized by detecting the yellow cap of the pull rope on the twist lock, so the yellow cap is taken as the target. The master controller obtains the layout information and the target image, calls the target recognition algorithm from the path planning assembly to recognize the target information in the target image (including the angle between the camera lens and the target), then calls a path planning algorithm in the path planning assembly according to the target information and inputs the target information into it to obtain planned path information, from which it generates gait planning information for the walking feet 9. The gait planning information is generated as follows: using the pitch angle of the target obtained by the camera and the Pythagorean theorem, the master controller computes the straight-line distance between the camera and the actual target position, and fits a gait instruction group with different step sizes to reach the target, realizing straight movement of the robot body. For steering, the robot body carries an IMU (inertial measurement unit) sensor: the angle difference between the robot body and the target azimuth is computed from the IMU, and the master controller fits a steering gait instruction group from the angle difference to realize the steering motion of the robot body. The master controller sends control signals to the electromagnet assembly and the control assembly according to the gait planning information; that is, it sends a power-off control signal to the electromagnet assembly whenever a walking foot 9 is lifted during walking.
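The distance and gait-fitting computation described above might look as follows in Python; the trigonometric model (known camera height, downward pitch) and the step sizes are assumptions for illustration, since the patent does not give concrete numbers.

```python
# One plausible reading of the distance computation: the camera sits at a
# known height above the container surface and looks down at the target by
# the measured pitch angle, giving a right triangle from which the ground
# distance follows.
import math

def ground_distance(camera_height_m, pitch_deg):
    """Horizontal distance to the target from height and downward pitch."""
    return camera_height_m / math.tan(math.radians(pitch_deg))

def fit_gait(distance_m, step_sizes_m=(0.20, 0.10, 0.05)):
    """Greedily fit a straight-walk gait instruction group from a few step sizes."""
    steps, remaining = [], distance_m
    for s in step_sizes_m:
        n = int(remaining // s)
        if n:
            steps.append((s, n))           # (step size, repetitions)
            remaining -= n * s
    return steps, remaining                # leftover below the smallest step

d = ground_distance(0.35, 18.0)            # e.g. camera 0.35 m up, pitched 18 deg down
print(round(d, 3), fit_gait(d))
```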
The principle of the target recognition algorithm is as follows. As shown in fig. 8, target detection is performed on a target image using an existing YOLO3 network with three feature maps of different scales. The YOLO3 network adopts a backbone called Darknet-53 (containing 53 convolutional layers), and shortcut links (shortcut connections) are set between preset layers according to actual requirements. For example, if the preset layers are the 5th to 13th layers, a shortcut link is set between them: during back propagation the result of the 13th layer is passed directly to the 5th layer, so the 5th layer receives the back-propagated result of the 13th layer as well as that of the 6th layer; the two results are superposed to compute each layer's contribution to the overall loss, and the loss gradients determine the descent direction. The shortcut links are set because, when the model has too many layers, back-propagated values from the bottom layers correct the higher layers poorly; with shortcut links, the parameters of the higher layers can still be corrected.
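For reference, a shortcut link of the kind described is sketched below as a Darknet-53-style residual block in PyTorch; the channel count is illustrative, and this is a generic reconstruction of the standard component rather than the patent's exact network definition.

```python
# A minimal Darknet-53 residual component (cf. fig. 9): a 1x1 then 3x3
# convolution whose output is added back to the input via the shortcut,
# so gradients from deep layers reach earlier layers directly.
import torch
import torch.nn as nn

class DarknetResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.block = nn.Sequential(
            nn.Conv2d(channels, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.LeakyReLU(0.1),
            nn.Conv2d(half, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)           # shortcut link: identity + residual

x = torch.randn(1, 64, 52, 52)
print(DarknetResidual(64)(x).shape)        # torch.Size([1, 64, 52, 52])
```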
Because computational efficiency is limited on the edge device of the robot body, the channels of the convolutional layers of the neural network model are pruned: unimportant channels are identified, as described next, and deleted.
Each channel is assigned a scaling factor, and the importance of a channel is expressed by the absolute value of its scaling factor. In YOLO, every convolutional layer except the detection heads is followed by a BN layer (batch normalization layer) to speed up convergence and improve generalization. The BN layer normalizes the convolutional output using the mean and variance of the small batch of input features, expressed as:
z_out = γ · (z_in - μ) / sqrt(σ² + ε) + β

where μ and σ² are the mean and variance of the small batch of input features, γ and β represent the trainable scale factor and bias, and ε is a small constant for numerical stability. Therefore, the trainable scaling factor γ in the BN layer may be directly employed as an indicator of the importance of the channel. To effectively distinguish important channels from unimportant ones, channel-level sparse training may be performed by applying L1 regularization over the scaling factor γ, with a loss function expressed as:

L = L_det + α · Σ_γ f(γ)
where L_det is the ordinary detection training loss, f(γ) = |γ| represents the L1 norm, and α represents the penalty factor used to balance the two loss terms. In the implementation, a sub-gradient is used to optimize the non-smooth L1 penalty term.
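A minimal sketch of this channel-level sparse training step in PyTorch, assuming a model built from BatchNorm2d layers such as the residual block above; the penalty factor value is illustrative. The sub-gradient of α·|γ| is simply α·sign(γ), added to each BN scale factor's gradient after the ordinary backward pass.

```python
# Channel-level sparsity: add the L1 sub-gradient on every BN scale factor
# gamma after loss.backward() and before optimizer.step().
import torch
import torch.nn as nn

def add_l1_sparsity_grad(model, alpha=1e-4):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):  # m.weight is the gamma vector
            m.weight.grad.add_(alpha * torch.sign(m.weight.data))

# usage inside a training loop (detection loss and backward pass assumed done):
#   loss.backward()
#   add_l1_sparsity_grad(model, alpha=1e-4)
#   optimizer.step()
```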
After sparse training, a global threshold is introduced to determine whether a feature channel is deleted. The global threshold is set at the n-th percentile of all γ values, so as to control the overall pruning rate. A local safety threshold π is also introduced to maintain the integrity of the network connections and prevent all channels of a convolutional layer from being cut off; π is set, per layer, to the γ value corresponding to the percentage of channels to be kept in that layer. Feature channels whose scaling factor γ is smaller than the minimum of the global threshold and the local safety threshold π are pruned. In YOLO, several special connections between layers need to be handled carefully, e.g. the route layer and the shortcut layer. First, a pruning mask is constructed for each convolutional layer according to the global threshold and the local safety threshold π. For a route layer, the pruning masks of its input layers are concatenated in order, and the concatenated mask is used as the pruning mask of that layer.
The shortcut layer in YOLO plays a role similar to that in ResNet: all layers connected to a shortcut layer have the same number of channels. To match the feature channels of every layer joined by shortcut layers, the pruning masks of all connected layers are traversed and an OR operation is performed on them to generate the final pruning mask for those layers.
Pruning is performed iteratively to avoid over-pruning and catastrophic degradation of model accuracy. In tests on the COCO2017 dataset, the pruned model is trained directly with the same training hyper-parameters and then fine-tuned to restore the initial training accuracy as far as possible. All parts not stated as innovations are existing algorithms.
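The mask construction described above can be sketched as follows; the pruning percentages are illustrative, and the exact threshold bookkeeping here is an assumption consistent with the text rather than the patent's verbatim procedure.

```python
# Keep-masks from a global percentile threshold plus a per-layer safety
# threshold, with an OR merge for layers joined by a shortcut.
import torch

def layer_mask(gamma, global_thresh, keep_ratio=0.1):
    k = max(1, int(len(gamma) * keep_ratio))
    local_thresh = torch.sort(gamma.abs(), descending=True).values[k - 1]
    # keep a channel unless gamma falls below BOTH thresholds, so the top
    # keep_ratio channels of every layer always survive
    return gamma.abs() >= torch.minimum(global_thresh, local_thresh)

gammas = [torch.randn(8).abs(), torch.randn(8).abs()]   # two connected layers
g_thresh = torch.cat(gammas).quantile(0.7)              # prune ~70% overall
masks = [layer_mask(g, g_thresh) for g in gammas]
merged = masks[0] | masks[1]               # OR: shortcut layers keep matching channels
print(merged)
```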
As shown in figs. 9 and 10, after layer 79 the convolutional network produces the first-scale detection result through additional convolutional layers. The feature map used for detection here is downsampled 32 times relative to the input image: for a 416×416 input, the feature map is 13×13. Because the downsampling factor is high, the receptive field of this feature map is relatively large, making it suitable for detecting relatively large objects in the image.
To realize finer-grained detection, the layer-79 feature map is upsampled (the upsampling convolution branch from layer 79) and then fused with the layer-61 feature map by concatenation, yielding the finer-grained layer-91 feature map; after several more convolutional layers, a feature map downsampled 16 times relative to the input image is obtained. It has a medium receptive field and is suitable for detecting medium-sized objects.
Finally, the layer-91 feature map is upsampled again and fused by concatenation with the layer-36 feature map, finally yielding a feature map downsampled 8 times relative to the input image. Its receptive field is the smallest, making it suitable for detecting small objects.
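The upsample-and-concatenate fusion in the last two paragraphs can be sketched in a few lines of PyTorch; the channel counts are illustrative assumptions.

```python
# Fusing a deep, coarse feature map with a shallower, finer one: upsample
# the 13x13 map 2x and concatenate it with the 26x26 map along channels.
import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode="nearest")
deep = torch.randn(1, 256, 13, 13)         # 32x-downsampled feature map
mid = torch.randn(1, 512, 26, 26)          # 16x-downsampled feature map
fused = torch.cat([up(deep), mid], dim=1)  # concatenation along channels
print(fused.shape)                         # torch.Size([1, 768, 26, 26])
```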
As the number and scale of the output feature maps change, the sizes of the prior boxes also need to be adjusted. YOLO2 already used K-means clustering to obtain prior box sizes; YOLO3 continues this approach, setting 3 prior boxes for each downsampling scale and clustering 9 prior box sizes in total. The 9 prior boxes on the COCO dataset are: (10×13), (16×30), (33×23), (30×61), (62×45), (59×119), (116×90), (156×198), (373×326).
As shown in fig. 11, the larger prior boxes (116×90), (156×198), (373×326) are assigned to the smallest 13×13 feature map (largest receptive field), suitable for detecting larger objects. The medium prior boxes (30×61), (62×45), (59×119) are assigned to the medium 26×26 feature map (medium receptive field), suitable for detecting medium-sized objects. The largest 52×52 feature map (smallest receptive field) uses the smaller prior boxes (10×13), (16×30), (33×23), suitable for detecting smaller objects.
For example: for a 416×416 input image, 3 prior boxes are set for each grid cell of the feature map at each scale, for a total of 13×13×3 + 26×26×3 + 52×52×3 = 10647 predictions. Each prediction is a (4+1+80) = 85-dimensional vector containing the box coordinates (4 values), the box confidence (1 value) and the object class probabilities (80 classes for the COCO dataset). By contrast, YOLO2 makes 13×13×5 = 845 predictions; YOLO3 predicts more than 10 times as many boxes, at different resolutions, so the mAP and the detection of small objects are both improved to some extent. (mAP, mean Average Precision, is the mean of the per-class AP values.)
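The quoted counts are easy to verify; the snippet below reproduces the arithmetic for a 416×416 input.

```python
# Three anchor boxes per grid cell at each of three scales, each prediction
# an 85-dimensional vector on COCO.
scales = (13, 26, 52)
boxes = sum(s * s * 3 for s in scales)
print(boxes)                               # 10647 predicted boxes
print(boxes * (4 + 1 + 80))                # 10647 predictions x 85 values each
yolo2 = 13 * 13 * 5
print(boxes / yolo2)                       # > 10x more boxes than YOLO2's 845
```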
the specific implementation process is as follows:
when the robot body walks on the container, identifying target information, namely identifying yellow caps on the twistlocks, from the acquired target image; and aiming at each work of the robot body, a path planning algorithm can be obtained through the selected or set layout information during each work, the selected path planning algorithm is combined with target information to carry out path planning, gait planning information is finally produced according to the path planning, and foot steering engine movement of the walking foot 9 is controlled according to the gait planning information until the walking foot 9 moves to a twist lock position. When the walking foot 9 is controlled to move, the electromagnet body 8 is powered off, the magnet adsorption of the walking foot 9 is released, leg lifting and walking are performed, after the walking foot 9 is put down, the electromagnet body 8 is powered on, the magnet adsorption work of the walking foot 9 is performed, the electromagnet body is adsorbed on the surface of a container, and meanwhile, the force adsorbed by the magnet is detected through the second strain sensor, so that whether walking is stable or not is judged.
When the robot body reaches the target position, i.e. the twist-lock position, the master controller moves the mechanical arm according to the target information to bring the unlocking device toward the target. Meanwhile, data collected by the distance sensor 3 are used to check the gap between container layers (the container connection gap). If a gap is found, the master controller drives the adjusting motor 14 (i.e. the buckle steering engine) so that the rotary buckle 4 rotates and clamps into the container connection gap; the rotary buckle 4 supports the robot body and acts as a lever fulcrum for the unlocking mechanical arm 11. After rotation, the unlocking mechanical arm 11 is moved, and the first strain sensor 2 on the rotary buckle 4 is used to judge whether the buckle is seated; if so, the master controller drives the electric push rod 5 so that the unlocking U-shaped claw 1 at its top approaches the end of the twist-lock pull rope. According to the position of the target, the master controller controls the rotary steering engine (an existing rotary motor) to rotate the unlocking U-shaped claw 1 to a suitable working angle based on the angle between the camera lens and the target. The twist lock is then unlocked by the thrust of the electric push rod 5: during pushing, the master controller reads the operating thrust detected by the clamping force sensor, and once it reaches a preset amount, unlocking is judged complete and the unlocking mechanical arm 11 is retracted. The twist lock on the container is thus unlocked automatically, avoiding the labor cost and frequent misoperation of manual work.
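Likewise, the unlocking sequence can be summarized as a simple ordered procedure; the threshold value and the callable interfaces are illustrative assumptions.

```python
# The unlock sequence: verify the gap, seat the rotary buckle as a fulcrum,
# align the claw to the lens-target angle, then push until the sensed
# thrust reaches the preset completion amount.
UNLOCK_THRUST_DONE_N = 120.0               # hypothetical "unlock complete" force

def unlock_twistlock(gap_ok, engage_buckle, rotate_claw_deg, push, read_force,
                     lens_target_angle_deg):
    if not gap_ok():                       # distance sensor 3: usable gap?
        return False
    engage_buckle()                        # adjusting motor 14 seats buckle 4
    rotate_claw_deg(lens_target_angle_deg) # rotary steering engine aligns claw 1
    while read_force() < UNLOCK_THRUST_DONE_N:
        push()                             # electric push rod 5 drives the claw
    return True                            # thrust reached preset: retract arm 11

# dry run with stand-in callables (forces read in sequence):
print(unlock_twistlock(lambda: True, lambda: None, lambda a: None,
                       lambda: None, iter([60.0, 95.0, 130.0]).__next__, 12.5))
```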
In this application, a suitable path planning algorithm can be selected from the plurality of path planning algorithms as required, without providing a separate robot for each algorithm, which saves cost; meanwhile, since the path planning algorithms are integrated on the path planning assembly, the algorithms can be updated and upgraded centrally without carrying the whole robot away, making maintenance more convenient.
The foregoing is merely exemplary of the present application, and technical solutions and/or features well known in the art have not been described in detail here. It should be noted that those skilled in the art can make several variations and modifications without departing from the technical solution of the present application, and these should also be regarded as within its protection scope, without affecting the effect of its implementation or the applicability of the patent. The protection scope of the present application is defined by the claims; the specific embodiments and other descriptions in the specification may be used to interpret the content of the claims.

Claims (7)

1. A container intelligent auxiliary robot, comprising a robot body, characterized in that: a setting assembly, a camera, a path planning assembly, a main controller, a control assembly and a plurality of walking feet are arranged on the robot body, and electromagnet assemblies are arranged on the walking feet;
the setting component is used for inputting the layout information of the container and sending the layout information to the main controller;
the camera shoots a target image in the moving process of the robot body and sends the target image to the main controller;
a path planning component storing a target recognition algorithm and a plurality of path planning algorithms;
the main controller is used for acquiring the layout information of the container from the setting assembly and the target image from the camera; according to the acquired layout information and target image, it calls the target recognition algorithm from the path planning assembly to recognize the target information in the target image; according to the target information, the main controller calls any one of the path planning algorithms in the path planning assembly and inputs the target information into it for path planning to obtain planned path information, generates gait planning information for the walking feet according to the planned path information, and sends a control signal to the control assembly according to the gait planning information;
the control assembly is used for controlling the electromagnet assembly to work according to the control signal;
the electromagnet assembly is adsorbed onto the container or released under the control of the control assembly;
an unlocking mechanical arm is arranged on the robot body, and an unlocking U-shaped claw, a first strain sensor and a distance sensor are arranged on the unlocking mechanical arm; the distance sensor is used for detecting the width of the container connection gap, and the first strain sensor detects the reverse thrust when the unlocking U-shaped claw unlocks the twist lock; when the robot body walks to the target, the main controller controls the unlocking mechanical arm so that the distance sensor detects the container connection gap between container layers, and after the container connection gap is detected, the main controller controls the unlocking U-shaped claw to be pushed to the twist-lock target position, successful unlocking being ensured by the unlocking reverse thrust detected by the first strain sensor.
2. The container intelligent auxiliary robot according to claim 1, wherein: the electromagnet assembly comprises an electromagnet body, a magnet controller, a magnet driver and a second strain sensor, wherein the second strain sensor is arranged on the end face of the electromagnet body, facing one side of the container, the electromagnet body is in signal connection with the magnet driver, the magnet driver is in signal connection with the magnet controller, and the magnet controller is in signal connection with the control assembly.
3. The container intelligent auxiliary robot according to claim 2, wherein: a plurality of foot steering engines are arranged on each walking foot, a rotary steering engine is arranged on the unlocking mechanical arm and connected with a rotary steering engine driver, an arm steering engine is arranged on the unlocking mechanical arm, a steering engine driver is arranged on the robot body, the foot steering engines are signal-connected with the steering engine driver, the arm steering engine is signal-connected with the steering engine driver, the steering engine driver is connected with the main controller, and the control assembly acquires the control signal and, according to it, controls the foot steering engines and the electromagnet body to move cooperatively following the gait planning information.
4. The container intelligent auxiliary robot according to claim 1, wherein: the path planning algorithms comprise the Dijkstra algorithm, the best-first search algorithm and the A* algorithm.
5. The intelligent auxiliary robot for a container according to claim 4, wherein: the target recognition algorithm is a YOLO target recognition algorithm based on deep learning.
6. The intelligent auxiliary robot for a container according to claim 5, wherein: the YOLO target recognition algorithm utilizes a YOLO3 network, detects targets by setting three feature maps with different scales, and sets shortcut links between preset layers of the YOLO3 network.
7. The container intelligent auxiliary robot according to claim 3, wherein: the unlocking mechanical arm comprises a shell, and the rotary steering engine is located between the shell and the unlocking device; an electric push rod is arranged in the shell, and the unlocking U-shaped claw is located on the end of the electric push rod that extends out of the shell; an L-shaped rotary groove is formed in the shell and extends on two side walls of the shell; a limiting sleeve is fixed on the electric push rod, and a limiting groove is formed in the limiting sleeve; a rotary buckle extending out of the rotary groove is clamped in the limiting groove of the limiting sleeve; the distance sensor is located on the shell; an adjusting motor for adjusting the position of the rotary buckle is fixedly arranged in the shell, the adjusting motor is signal-connected with the main controller, and an output shaft of the adjusting motor is connected with the rotary buckle.
CN202210301724.4A 2022-03-24 2022-03-24 Intelligent auxiliary robot for container Active CN114700964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210301724.4A CN114700964B (en) 2022-03-24 2022-03-24 Intelligent auxiliary robot for container

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210301724.4A CN114700964B (en) 2022-03-24 2022-03-24 Intelligent auxiliary robot for container

Publications (2)

Publication Number Publication Date
CN114700964A CN114700964A (en) 2022-07-05
CN114700964B (en) 2023-09-22

Family

ID=82170273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210301724.4A Active CN114700964B (en) 2022-03-24 2022-03-24 Intelligent auxiliary robot for container

Country Status (1)

Country Link
CN (1) CN114700964B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101537618A (en) * 2008-12-19 2009-09-23 北京理工大学 Visual system for ball picking robot in stadium
CN106354161A (en) * 2016-09-26 2017-01-25 湖南晖龙股份有限公司 Robot motion path planning method
CN106774315A (en) * 2016-12-12 2017-05-31 深圳市智美达科技股份有限公司 Autonomous navigation method of robot and device
CN109131621A (en) * 2018-09-04 2019-01-04 洛阳清展智能科技有限公司 A kind of bionic 6-leg formula boiler of power plant water-cooling wall Measuring error climbing robot
CN109828578A (en) * 2019-02-22 2019-05-31 南京天创电子技术有限公司 A kind of instrument crusing robot optimal route planing method based on YOLOv3
CN111203911A (en) * 2020-02-25 2020-05-29 广东博智林机器人有限公司 Linear motion execution device and reinforcing steel bar processing equipment
CN111845405A (en) * 2020-06-30 2020-10-30 南京工程学院 Control method of mobile charging pile and mobile charging system
JP2021122879A (en) * 2020-02-04 2021-08-30 株式会社メイキコウ Image processing device and carrier device equipped with image processing device
CN113456429A (en) * 2021-07-30 2021-10-01 刘未艾 Limb rehabilitation robot and using method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200039076A1 (en) * 2016-03-04 2020-02-06 Ge Global Sourcing Llc Robotic system and method for control and manipulation


Also Published As

Publication number Publication date
CN114700964A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
US11906961B2 (en) Systems and methods for unmanned vehicles having self-calibrating sensors and actuators
KR101813697B1 (en) Unmanned aerial vehicle flight control system and method using deep learning
US10065654B2 (en) Online learning and vehicle control method based on reinforcement learning without active exploration
Teuliere et al. Chasing a moving target from a flying UAV
CN105652891A (en) Unmanned gyroplane moving target autonomous tracking device and control method thereof
US11454974B2 (en) Method, apparatus, device, and storage medium for controlling guide robot
US11334085B2 (en) Method to optimize robot motion planning using deep learning
Liang et al. Unmanned aerial transportation system with flexible connection between the quadrotor and the payload: modeling, controller design, and experimental validation
US20230087467A1 (en) Methods and systems for modeling poor texture tunnels based on vision-lidar coupling
CN114700964B (en) Intelligent auxiliary robot for container
US20230026394A1 (en) Pose detection of an object in a video frame
Piperakis et al. Outlier-robust state estimation for humanoid robots
US20240217660A1 (en) State information and telemetry for suspended load control equipment apparatus, system, and method
US11315279B2 (en) Method for training a neural convolutional network for determining a localization pose
US20220315220A1 (en) Autonomous Aerial Navigation In Low-Light And No-Light Conditions
EP4175907B1 (en) Method and apparatus for relative positioning of a spreader
CN117963084A (en) Method and device for deploying and recovering underwater robot and mother ship
Fukao et al. Tracking control of an aerial blimp robot based on image information
KR102547412B1 (en) Method and apparatus for providing gimbal control information of flight platform
US11992444B1 (en) Apparatus, system, and method to control torque or lateral thrust applied to a load suspended on a suspension cable
US20230004170A1 (en) Modular control system and method for controlling automated guided vehicle
US20240210955A1 (en) Controller and method
CN110703786B (en) Mooring rotor wing platform retraction controller and method
US20230244750A1 (en) Computer-Implemented Symbolic Differentiation Using Chain Rule
Ma et al. WIRELESS VISUAL SERVOING FOR ODIS–AN UNDER CAR INSPECTION MOBILE ROBOT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant