CN116563835A - Transfer method, transfer device and electronic device - Google Patents


Info

Publication number
CN116563835A
Authority
CN
China
Prior art keywords
image
transported
transfer
mask
moving speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310533520.8A
Other languages
Chinese (zh)
Other versions
CN116563835B (en)
Inventor
崔致豪
徐保伟
邵天兰
丁有爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mech Mind Robotics Technologies Co Ltd
Original Assignee
Mech Mind Robotics Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mech Mind Robotics Technologies Co Ltd filed Critical Mech Mind Robotics Technologies Co Ltd
Priority to CN202310533520.8A priority Critical patent/CN116563835B/en
Publication of CN116563835A publication Critical patent/CN116563835A/en
Application granted granted Critical
Publication of CN116563835B publication Critical patent/CN116563835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C3/00Sorting according to destination
    • B07C3/10Apparatus characterised by the means used for detection of the destination
    • B07C3/14Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management

Abstract

The present disclosure provides a transfer method, a transfer device, and an electronic device. The transfer method includes: collecting a target image, wherein the target image comprises an object to be transported; identifying the target image to obtain attribute information of the object to be transported; determining the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation; and controlling the transfer device to transfer the object according to the moving speed. In this way, objects with different attribute information are transferred at their corresponding moving speeds, which improves transfer efficiency.

Description

Transfer method, transfer device and electronic device
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a transfer method, a transfer device and an electronic device.
Background
In many scenarios, objects need to be transported. For example, in a logistics scenario, a transfer device can automatically grasp express packages and transport them to their corresponding destinations, thereby sorting the packages.
In the related art, the transfer device applies the same transfer strategy to different express packages, which results in low transfer efficiency.
Disclosure of Invention
Aspects of the present disclosure provide a transfer method, a transfer apparatus, and an electronic apparatus to improve transfer efficiency of an object to be transferred.
A first aspect of an embodiment of the present disclosure provides a transfer method, applied to a transfer device, including: collecting a target image, wherein the target image comprises an object to be transported; identifying a target image to obtain attribute information of an object to be transported; determining the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation; and controlling the transferring device to transfer the object to be transferred according to the moving speed.
A second aspect of the disclosed embodiments provides a transfer device comprising:
the acquisition module is used for acquiring a target image, wherein the target image comprises an object to be transported;
the identification module is used for identifying the target image and obtaining attribute information of the object to be transported;
the determining module is used for determining the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation;
and the control module is used for controlling the transfer device to transfer the object to be transferred according to the moving speed.
A third aspect of an embodiment of the present disclosure provides an electronic device, including: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the transfer method of the first aspect when executing the computer program.
A fourth aspect of the disclosed embodiments provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the transfer method of the first aspect.
A fifth aspect of the disclosed embodiments provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program, causing the electronic device to perform the transfer method of the first aspect.
The embodiment of the disclosure is applied to an object transfer scenario: a target image comprising an object to be transferred is acquired; the target image is identified to obtain attribute information of the object; the moving speed of the transfer device is determined according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation; and the transfer device is controlled to transfer the object according to the moving speed. Thus, objects with different attribute information are transferred at their corresponding moving speeds, which improves transfer efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
fig. 1 is an application scenario diagram of a transfer method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of steps of a transfer method provided in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a target image provided in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic view of a transfer device provided in an exemplary embodiment of the present disclosure;
FIG. 5 is a flow chart of steps of another transfer method provided in an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of an identification model provided by an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of a Focus provided in an exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram of an SPP provided in an exemplary embodiment of the present disclosure;
fig. 9 is a block diagram of a CSP1 provided in an exemplary embodiment of the present disclosure;
fig. 10 is a block diagram of a CSP2 provided by an exemplary embodiment of the present disclosure;
FIG. 11 is a schematic illustration of a mask image provided in an exemplary embodiment of the present disclosure;
Fig. 12 is a block diagram of a transfer device according to an exemplary embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the drawings and specific examples thereof, together with the following description. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
In the related art, when transferring an object to be transported from one position to a designated position, the transfer device uses the same moving speed for different objects. For example, suppose object A is large and object B is small. If object A is moved at a fast speed, it is easily thrown off during the transfer, so its transfer cannot be completed. If object B is moved at a slow speed, its transfer time is unnecessarily prolonged. In short, transferring different objects at the same moving speed reduces transfer efficiency.
To address this problem, the present disclosure provides a transfer method: collecting a target image that includes the object to be transported; identifying the target image to obtain attribute information of the object; determining the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation; and controlling the transfer device to transfer the object according to the moving speed. In this way, objects with different attribute information are moved at their corresponding speeds, improving transfer efficiency.
An application scenario of an embodiment of the present disclosure is shown in fig. 1, which includes a transfer device, a carrying platform P1, a carrying platform P2, and an object D to be transported. The transfer device can grasp object D on platform P1 and transfer it to platform P2 along transfer route a or transfer route b. A control device controls the moving speed of the transfer device during the transfer so as to improve transfer efficiency.
Fig. 1 is only an exemplary application scenario; the embodiments of the present disclosure may be applied to the transfer of any object and are not limited to a specific application scenario.
Fig. 2 is a flowchart illustrating steps of a transferring method according to an exemplary embodiment of the present disclosure. The transferring method is applied to a transferring device and specifically comprises the following steps:
S201, collecting a target image.
Wherein the target image comprises an object to be transported. In the present disclosure, one or more objects to be transported may be included in the target image.
In the present disclosure, a camera is mounted on the transfer device, and the camera can collect a target image of the object to be transported.
For example, referring to fig. 3, an image of the objects placed on the carrying platform is acquired, resulting in a target image P. The target image P contains a plurality of objects to be transported (h, z, q, b, d1, d2, d3, d4, and y), where "FY" is a marking on the outer package of object z.
Furthermore, the object to be transported may be an express package, and the category of the object to be transported is one of: box package, column package, sphere package, bag package, sheet package, or profile (irregular) package.
In the present disclosure, a box package is an express package carried in a carton or similar box, such as object h in fig. 3. A column package is an express package with a cylindrical or near-cylindrical shape, such as object b in fig. 3. A sphere package is an express package with a spherical or near-spherical shape, such as object q in fig. 3. A bag package is an express package carried in a plastic bag, such as objects d1, d2, d3, and d4 in fig. 3. A sheet package is a flat package such as an envelope, such as object z in fig. 3. A profile (irregular) package is any express package other than the box, column, sphere, bag, and sheet types, such as object y in fig. 3. In addition, the user can redefine the profile category as needed; for example, column and sphere packages may be merged into the profile category, in which case the categories of objects to be transported are: box package, bag package, sheet package, and profile package.
In the present disclosure, a plurality of categories may be preset, and then a category to which an object to be transported belongs may be determined.
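The preset category taxonomy above can be sketched as a small enum. This is a minimal illustration, not from the patent; the class and member names are assumptions, and the merge rule mirrors the example of folding column and sphere packages into the profile category.

```python
from enum import Enum

class PackageCategory(Enum):
    """Preset categories for objects to be transported (names are illustrative)."""
    BOX = "box"          # carton-carried parcels
    COLUMN = "column"    # cylindrical parcels
    SPHERE = "sphere"    # spherical parcels
    BAG = "bag"          # plastic-bag parcels
    SHEET = "sheet"      # envelopes and other flat parcels
    PROFILE = "profile"  # irregular parcels not covered above

def merge_to_profile(cat: PackageCategory) -> PackageCategory:
    """A user-configurable merge, e.g. treating columns and spheres as profile packages."""
    if cat in (PackageCategory.COLUMN, PackageCategory.SPHERE):
        return PackageCategory.PROFILE
    return cat
```

Identification then only has to output one of these labels for each detected object.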
S202, identifying a target image to obtain attribute information of an object to be transported.
In the embodiment of the present disclosure, the attribute information includes a basic attribute and a mask object, wherein the basic attribute comprises at least one of category, degree of completeness, pose state, color feature, and texture feature.
The degree of completeness can be divided into complete, half-overlapped, highly overlapped, and truncated. Complete means the object is not covered by any other object and is fully visible in the target image, such as objects q, d3, h, and y in fig. 3. Half-overlapped means a small part of the object is covered by other objects, such as objects b, d1, and z in fig. 3. Highly overlapped means a larger part of the object is covered by other objects, such as object d4 in fig. 3. Truncated means a part of the object falls outside the target image, such as object d2 in fig. 3. The demarcation threshold between half-overlapped and highly overlapped may be set as desired and is not limited here; for example, an overlap ratio of 50% or less may count as half-overlapped, and more than 50% as highly overlapped.
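The completeness labeling above can be expressed as a short classifier. This is a hedged sketch: the 50% threshold is the example value from the text, the label strings are assumptions, and a real system would derive the overlap ratio from the recognition model's mask output.

```python
def completeness_label(overlap_ratio: float, truncated: bool) -> str:
    """Map an overlap ratio (fraction of the object covered by other
    objects) and a truncation flag to a completeness label.
    The 0.5 demarcation threshold is configurable in practice."""
    if truncated:
        return "truncated"          # part of the object lies outside the image
    if overlap_ratio == 0.0:
        return "complete"
    if overlap_ratio <= 0.5:
        return "half-overlapped"
    return "highly overlapped"
```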
Further, the pose state may be classified into a normal pose, a vertical pose, and an inclined pose. The normal pose means the object lies flat on the carrying platform, such as objects d1, d4, and q in fig. 3. The vertical pose means the object stands upright on the carrying platform. The inclined pose means the object is tilted relative to the carrying platform, such as object h in fig. 3, one part of which rests on the platform while the other part overlaps other objects. Color features can be categorized into normal color, reflective, and other colors. For example, for a box package, the brown of a carton is the normal color; if the package is a foam box, the color feature is white; if it is a black pouch, the color feature is black; and if it is wrapped in plastic tape, the color feature includes reflection. Texture features can be categorized as normal, wrinkled, foamed, broken, and the like.
In the present disclosure, attribute information of an object to be transported may be obtained by identifying a target image.
S203, determining the moving speed of the transfer device according to the attribute information.
The attribute information and the moving speed have a preset corresponding relation. In the present disclosure, the correspondence between attribute information and moving speed may be preset. For example, if the attribute information is the category, the correspondence between category and moving speed may be preset, as shown in table 1.
TABLE 1
If the attribute information comprises the category and the pose state, the correspondence between (category, pose state) and the moving speed may be preset, as shown in table 2.
TABLE 2
In this method, a corresponding moving speed can be set, according to actual needs, for objects with each kind of attribute information, so that the transfer efficiency of the objects to be transported is improved.
Further, referring to fig. 1, an object may be transported along transfer route a or transfer route b during transfer. A transfer route is divided into a plurality of segments (e.g., segment X, segment Y, and segment Z), and the correspondence between attribute information and the speed on each segment may be preset. For example, if the category is sheet package, the moving speed on segment X may be 90% V, on segment Y may be V, and on segment Z may be 85% V.
In the present disclosure, the correspondence between attribute information and moving speed may be preset as needed; during the transfer, the moving speed to be used for an object is then determined from its identified attribute information.
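The lookup described above can be sketched as a table keyed by (category, pose state). The base speed `V`, the table entries, and the fallback default are all placeholder assumptions; the patent's Tables 1 and 2 define the real mapping.

```python
V = 1000.0  # hypothetical base moving speed (units arbitrary)

# Placeholder correspondence table in the spirit of Table 2:
# (category, pose state) -> moving speed.
SPEED_TABLE = {
    ("box", "normal"): 1.00 * V,
    ("box", "inclined"): 0.85 * V,
    ("sheet", "normal"): 0.90 * V,
    ("bag", "normal"): 0.95 * V,
}

def moving_speed(category: str, pose: str, default: float = 0.8 * V) -> float:
    """Look up the preset moving speed for the identified attributes,
    falling back to a conservative default for unseen combinations."""
    return SPEED_TABLE.get((category, pose), default)
```

The same pattern extends to per-segment speeds by keying the table on (attribute info, segment).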
S204, controlling the transferring device to transfer the object to be transferred according to the moving speed.
In the present disclosure, a transfer device may grasp an object to be transferred and then transfer the grasped object to be transferred to a target position. During the transfer, the object to be transferred is moved at the above-determined movement speed.
Illustratively, referring to FIG. 4, the transfer device 40 includes a base 41, a motion section 42, and an end effector 43. The base 41 is fixed; the end effector 43 moves with the motion section 42 to grasp the object D on the carrying platform P1 shown in fig. 1, and then transports object D to the carrying platform P2 at the determined moving speed.
In addition, the transfer device 40 includes a camera (not shown) for capturing the target image.
Further, if the transfer device is a suction cup device, controlling the transfer device to transfer the object according to the moving speed includes: determining the airflow rate of the suction cup device according to the attribute information, wherein the attribute information and the airflow rate have a preset corresponding relation; and controlling the suction cup device to suck the object with that airflow rate and to transport the sucked object at the moving speed.
Wherein the attribute information includes a basic attribute including the category, and the airflow rate corresponding to a sheet package is lower than the airflow rate corresponding to a box package.
In the disclosed embodiment, referring to fig. 4, the end effector 43 may be a suction cup and the transfer device may suction the object to be transferred using the suction cup. Referring to fig. 1, the suction cup moves the object to be transported sucked from the loading platform P1 to the loading platform P2 at a moving speed.
In addition, in the present disclosure, the correspondence between the attribute information and the airflow rate may be set in advance. The greater the airflow, the stronger the suction force of the suction cup; since a sheet package (such as an envelope) is easily damaged by strong suction, a lower airflow rate can be set for sheet packages, while box packages, which are typically rigid cartons, can use a higher airflow rate.
Further, the correspondence between attribute information and airflow rate may map the category alone, or the combination of category and other attribute information, to an airflow rate. The user can preset these values as needed; for example, if the default airflow rate is Q, the airflow rate for box packages may be set to 100% Q, for bag packages to 80% Q, and for sheet packages to 60% Q.
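The airflow correspondence can be sketched the same way as the speed lookup, using the example ratios from the text (100% Q box, 80% Q bag, 60% Q sheet). The function name and the fallback ratio are assumptions for illustration.

```python
def suction_airflow(category: str, q_default: float) -> float:
    """Return the suction airflow rate for a package category as a
    fraction of the default airflow Q, per the example ratios in the
    text; unknown categories fall back to the full default."""
    ratios = {"box": 1.0, "bag": 0.8, "sheet": 0.6}
    return ratios.get(category, 1.0) * q_default
```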
In summary, the present disclosure collects a target image comprising an object to be transported; identifies the target image to obtain attribute information of the object; determines the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation; and controls the transfer device to transfer the object according to the moving speed. Thus, objects with different attribute information are transferred at their corresponding moving speeds, which improves transfer efficiency.
Fig. 5 is a flowchart of steps of another method of transferring provided in an exemplary embodiment of the present disclosure. The method specifically comprises the following steps:
s501, collecting a target image.
For the specific implementation of this step, refer to S201; it is not repeated here.
S502, performing feature extraction processing on the target image through a trunk feature extraction module to obtain a first feature image.
In the present disclosure, referring to fig. 6, identifying the target image comprises identifying it with the recognition model 60, which includes: a trunk feature extraction module 61, an enhanced feature extraction module 62, and an attribute determination module 63. The recognition model may be a YOLOX model (a type of neural network).
Specifically, the trunk feature extraction module 61 includes Focus (focus layer), (CBA/DWConv) 1 (first depthwise separable convolution layer), CSP1 1 (first residual network layer 1), (CBA/DWConv) 2 (second depthwise separable convolution layer), CSP1 2 (second residual network layer 1), (CBA/DWConv) 3 (third depthwise separable convolution layer), CSP1 3 (third residual network layer 1), SPP (pooling layer), and CSP1 4 (fourth residual network layer 1).
Referring to fig. 7, Focus includes four Slice units, a Concat (splicing unit), and a CBA block. Each Slice takes every other pixel of the target image (of size 3×W×H, i.e., 3 channels, width W, height H), producing four independent feature images; Concat stacks these four images, expanding the 3 channels to 12, and the CBA then processes the result. Focus thus converts the target image into a feature image of size 12×(W/2)×(H/2).
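The Slice + Concat rearrangement can be sketched with NumPy. This covers only the pixel rearrangement (the trailing CBA convolution is omitted), and the function name is an assumption.

```python
import numpy as np

def focus_slice(img: np.ndarray) -> np.ndarray:
    """Rearrange a (3, W, H) image into (12, W/2, H/2) by taking every
    other pixel in four phase offsets and stacking the four results
    along the channel axis, as the Focus layer's Slice + Concat stage
    is described. Requires even W and H."""
    c, w, h = img.shape
    assert w % 2 == 0 and h % 2 == 0
    parts = [img[:, 0::2, 0::2], img[:, 1::2, 0::2],
             img[:, 0::2, 1::2], img[:, 1::2, 1::2]]
    return np.concatenate(parts, axis=0)
```

No pixel values are lost: the operation trades spatial resolution for channels, which is why the output size is 12×(W/2)×(H/2).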
Further, referring to fig. 7, CBA consists of Conv (convolution layer), BN (Batch Normalization), and Act (activation layer). Referring to fig. 8, SPP includes three Maxpool (max pooling) layers, a Concat, and a CBA. Referring to fig. 9, CSP1 has two branches: in one branch a CBA is followed by n Bottleneck (bottleneck) layers; the outputs of the two branches are then joined by a Concat and processed by one CBA. Each Bottleneck comprises a CBA and a (CBA/DWConv); the input features pass through the CBA and the (CBA/DWConv), are added to the input features, and the sum is output.
Further, the trunk feature extraction module 61 may produce one first feature image or several. As shown in fig. 6, it may output the first feature images a1, a2, and a3, or only one of them, for example only a3. Specifically, the first feature image a1, of size {192×(W/8)×(H/8)}, is output by CSP1 2; the first feature image a2, of size {768×(W/16)×(H/16)}, is output by CSP1 3; and the first feature image a3, of size {3072×(W/32)×(H/32)}, is output by CSP1 4.
S503, performing feature enhancement processing on the first feature image through an enhancement feature extraction module to obtain a second feature image.
In the present disclosure, referring to fig. 6, the enhanced feature extraction module 62 includes CBA 1 (first CBA block), an FPN (Feature Pyramid Network), and a PAN (Path Aggregation Network).
Further, the FPN includes: Upsample 1 (first upsampling layer), Concat 1 (first splicing unit), CSP2 1 (first residual network layer 2), CBA 2 (second CBA block), Upsample 2 (second upsampling layer), and Concat 2 (second splicing unit). The PAN includes: CSP2 2 (second residual network layer 2), (CBA/DWConv) 4 (fourth depthwise separable convolution layer), Concat 3 (third splicing unit), CSP2 3 (third residual network layer 2), (CBA/DWConv) 5 (fifth depthwise separable convolution layer), Concat 4 (fourth splicing unit), and CSP2 4 (fourth residual network layer 2).
Referring to fig. 10, CSP2 likewise has two branches, whose outputs are joined by a Concat and then processed by one CBA.
In the present disclosure, there may be one or more second feature images. Referring to fig. 6, there may be three second feature images b1, b2, and b3, or only one, for example only b3 output by the enhanced feature extraction module 62. Further, the second feature image b1 is output by CSP2 2 with size {192×(W/8)×(H/8)}; b2 is output by CSP2 3 with size {768×(W/16)×(H/16)}; and b3 is output by CSP2 4 with size {3072×(W/32)×(H/32)}.
S504, performing attribute analysis processing on the second characteristic image through an attribute determining module to obtain the basic attribute of the object to be transported.
In the present disclosure, referring to fig. 6, the attribute determination module 63 includes: CBA 3 (third CBA block), (CBA/DWConv) 6 (sixth depthwise separable convolution layer), (CBA/DWConv) 7 (seventh depthwise separable convolution layer), and Conv 1 to Conv 5 (first to fifth convolution layers).
In the present disclosure, if there are multiple second feature images, each is input into the attribute determination module separately. For each second feature image, the module outputs the probabilities of the different degrees of completeness, pose states, categories, color features, and texture features; these probabilities are then averaged over the second feature images, and the final degree of completeness, pose state, category, color feature, and texture feature are taken from the averaged values.
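The averaging-and-argmax fusion step can be sketched as follows. The head name and class list are illustrative assumptions, not from the patent; a real model would have one such head per basic attribute.

```python
import numpy as np

def fuse_attribute_heads(per_image_probs: list) -> dict:
    """Average each attribute head's class probabilities over the
    second feature images, then take the highest-probability class
    per attribute. Here only a 'pose' head is shown as an example."""
    classes = {"pose": ["normal", "vertical", "inclined"]}
    fused = {}
    for head, names in classes.items():
        mean = np.mean([p[head] for p in per_image_probs], axis=0)
        fused[head] = names[int(np.argmax(mean))]
    return fused
```

With a single second feature image the average is a no-op, matching the single-image case described above.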
For example, referring to table 3, if there is only one second feature image b3, the probabilities for b3 may determine the attribute information of the object as: degree of completeness half-overlapped, pose state normal, category box package, color feature black, texture feature wrinkled. If there are several second feature images, such as b1, b2, and b3, the attribute determination module processes each of them and averages the resulting probabilities, likewise determining the attribute information as: half-overlapped, normal pose, box package, black, wrinkled.
TABLE 3
S505, performing mask analysis processing on the second characteristic image through a mask determining module to obtain a mask image.
Referring to fig. 6, the attribute information further includes a mask object; accordingly, the recognition model further includes a mask determination module 64 for identifying the mask object of the object to be transported in a mask image corresponding to the target image.
In an alternative embodiment, the feature enhancement processing of the first feature image by the enhanced feature extraction module yields one second feature image, and the mask determination module includes a first convolution unit and a first decoder. Performing mask analysis on the second feature image with the mask determination module then comprises: convolving the second feature image with the first convolution unit to obtain a third feature image, and decoding the third feature image with the first decoder to obtain the mask image.
Specifically, if the enhanced feature extraction module outputs only one second feature image, the mask determination module includes only one first convolution unit (Conv) and one first decoder. The convolution kernel size of the first convolution unit corresponds to the size of the second feature image. For example, if the size of the second feature image is {192×(W/8)×(H/8)}, the convolution kernel size of the first convolution unit is 1×1; if the size is {768×(W/16)×(H/16)}, the convolution kernel size is 2×2; and if the size is {3072×(W/32)×(H/32)}, the convolution kernel size is 4×4. A mask image having a size of 3×W×H is then obtained after decoding by the first decoder.
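The correspondence between feature-map scale and kernel size stated above can be written as a small lookup; the function names here are illustrative, not from the patent, and the stride is the downsampling factor of the second feature image relative to the W×H input.

```python
# Kernel sizes quoted in the text, keyed by downsampling stride:
# stride 8 -> 1x1, stride 16 -> 2x2, stride 32 -> 4x4.
KERNEL_BY_STRIDE = {8: 1, 16: 2, 32: 4}

def first_conv_kernel(stride):
    """Kernel size of the first convolution unit for a second feature image
    of size C x (W/stride) x (H/stride)."""
    return KERNEL_BY_STRIDE[stride]

def mask_image_shape(w, h):
    """Per the text, the mask image decoded by the first decoder always has
    size 3 x W x H, regardless of which scale the second feature image has."""
    return (3, w, h)
```

For instance, a {768×(W/16)×(H/16)} second feature image (stride 16) maps to a 2×2 kernel.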
In another alternative embodiment, a plurality of second feature images with different sizes are obtained in the feature enhancement processing of the first feature image by the enhancement feature extraction module, and the mask determination module further includes: a plurality of second convolution units with different convolution kernel sizes, a splicing unit and a second decoder. Performing mask analysis processing on the second feature images through the mask determining module to obtain the mask image includes: performing convolution processing on the second feature images through the second convolution units in a one-to-one correspondence to obtain a plurality of fourth feature images with the same size; splicing the plurality of fourth feature images with the same size through the splicing unit to obtain a fifth feature image; and decoding the fifth feature image through the second decoder to obtain the mask image.
Illustratively, referring to fig. 6, the plurality of second feature images with different sizes are a second feature image b1 {192×(W/8)×(H/8)}, a second feature image b2 {768×(W/16)×(H/16)}, and a second feature image b3 {3072×(W/32)×(H/32)}. There are accordingly three second convolution units, corresponding to Conv8, Conv7 and Conv6 in fig. 6: the convolution kernel size of Conv8 is 1×1, that of Conv7 is 2×2, and that of Conv6 is 4×4.
Further, after the second feature images are subjected to convolution processing by the second convolution units, the results are spliced by a splicing unit (Concat5), and the spliced feature is decoded by a second decoder (Decoder) to obtain the mask image, the size of which is 3×W×H.
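The multi-scale branch can be sketched in NumPy as below. One assumption is flagged explicitly: the patent does not say how the three convolutions produce same-size fourth feature images, so this sketch simply aligns all branches to the W/8×H/8 grid by nearest-neighbour upsampling with a factor equal to the quoted kernel size (as a real transposed convolution might), and projects channels by slicing. Shapes, channel counts and the `align` helper are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
W, H = 64, 64  # toy input image size

# Second feature images at the three scales listed in the text.
b1 = rng.standard_normal((192,  W // 8,  H // 8))
b2 = rng.standard_normal((768,  W // 16, H // 16))
b3 = rng.standard_normal((3072, W // 32, H // 32))

def align(feat, factor, out_channels):
    """Stand-in for a second convolution unit: bring a feature map onto the
    W/8 x H/8 grid. A real model would use a (transposed) convolution of
    kernel size `factor`; nearest-neighbour upsampling plus a crude channel
    slice keeps this sketch dependency-free."""
    up = np.kron(feat, np.ones((1, factor, factor)))  # upsample spatially
    return up[:out_channels]                          # crude channel projection

f1 = align(b1, 1, 64)  # branch for the 1x1 unit (Conv8 in fig. 6)
f2 = align(b2, 2, 64)  # branch for the 2x2 unit (Conv7)
f3 = align(b3, 4, 64)  # branch for the 4x4 unit (Conv6)

# Concat5: splice the same-size fourth feature images along channels
# to form the fifth feature image.
f5 = np.concatenate([f1, f2, f3], axis=0)

# The second decoder then maps f5 back to the 3 x W x H mask image;
# only its output shape is represented here.
decoded_mask_shape = (3, W, H)
```

The point of the sketch is the shape bookkeeping: three differently sized inputs become same-size fourth feature images, which concatenate into one fifth feature image before decoding.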
Referring to fig. 11, a mask image Y corresponding to the target image P of fig. 3 is shown.
S506, determining a mask object of the object to be transported according to the mask image.
Referring to fig. 3 and 11, mask objects corresponding to the objects to be transferred (h, d4, z, q, d1, b, d2, y, and d 3) are (y 1, y2, y3, y4, y5, y6, y7, y8, and y 9), respectively.
S507, determining the pixel area of the mask object in the mask image.
In the embodiment of the disclosure, the pixel area of each mask object may be determined from the mask image. The pixel area is positively correlated with, and thus represents, the actual area of the object to be transported.
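Counting the pixel area per mask object can be done with a single label histogram over the mask image; the toy mask below, its label values, and the `pixel_areas` helper are illustrative assumptions, not details from the patent.

```python
import numpy as np

# Toy mask image: each mask object has a distinct label value, 0 = background.
mask = np.zeros((6, 8), dtype=np.uint8)
mask[1:3, 1:5] = 1   # a mask object like y1: 2 x 4 = 8 pixels
mask[4:6, 2:4] = 2   # a mask object like y2: 2 x 2 = 4 pixels

def pixel_areas(mask_img):
    """Pixel area of each mask object in the mask image; positively
    correlated with the object's actual area."""
    labels, counts = np.unique(mask_img, return_counts=True)
    return {int(l): int(c) for l, c in zip(labels, counts) if l != 0}

areas = pixel_areas(mask)
```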
S508, according to the category and the pixel area, the moving speed of the transfer device is obtained.
Wherein, the category, the pixel area and the moving speed have a preset corresponding relation.
For example, referring to table 4, there is a preset correspondence relationship between the category, the pixel area, and the moving speed.
Table 4
In summary, the present disclosure may preset the correspondence between the category, the pixel area, and the movement speed. After determining the class and pixel area of the object to be transported, the movement speed may be determined according to the correspondence.
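A minimal sketch of such a preset correspondence is given below. The categories, area thresholds and speed values are entirely hypothetical stand-ins for Table 4, whose actual contents are configured per deployment; only the lookup pattern (category plus pixel-area bucket yields a moving speed) follows the text.

```python
import math

# Hypothetical (category, pixel-area bucket) -> moving speed correspondence,
# in the spirit of Table 4. Speeds here are arbitrary example values in m/s.
SPEED_TABLE = {
    "box":    [(10_000, 1.5), (50_000, 1.0), (math.inf, 0.6)],
    "pocket": [(10_000, 1.0), (50_000, 0.7), (math.inf, 0.4)],
}

def moving_speed(category, pixel_area):
    """Acquire the transfer device's moving speed from the preset
    correspondence among category, pixel area and moving speed."""
    for max_area, speed in SPEED_TABLE[category]:
        if pixel_area <= max_area:
            return speed
    raise ValueError("unreachable: last bucket is unbounded")
```

Larger pixel areas (heavier or bulkier packages) map to lower speeds in this sketch, matching the intuition that fragile or large objects are moved more slowly.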
S509, determining the placement height of the object to be transported.
Wherein determining the placement height of the object to be transported comprises: acquiring a first pose of a bearing platform, wherein a transfer device is used for transferring an object to be transferred placed on the bearing platform to a target placement position; acquiring a second pose of an object to be transported when the object is on the bearing platform; and determining the placement height according to the first pose and the second pose.
In the present disclosure, the pose of the bearing platform is preset, so the first pose of the bearing platform, such as P1 in fig. 1, can be obtained directly. If the gripping device is as shown in fig. 4, the first pose is the pose of the carrying platform in the coordinate system of the base 41. Further, the pose of the object to be transported in the coordinate system of the base 41 may be determined as the second pose, and the height H of the object to be transported (refer to fig. 1) may then be determined from the first pose and the second pose; this height H is the placement height.
S510, determining the position located at the placement height above the reference placement position as the target placement position.
In the present disclosure, the reference placement position is preset, and the pose of the reference placement position is also known. Referring to fig. 1, the F1 position on the carrying platform P2 is the reference placement position. The position located at the height H above the reference placement position F1 is determined as the target placement position F2.
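The height and target-position computation of steps S509 and S510 reduces to simple pose arithmetic. In this sketch poses are reduced to (x, y, z) translations in the base frame; real poses would also carry orientation, and all function names and numeric values are illustrative assumptions.

```python
def placement_height(platform_pose, object_pose):
    """Placement height H of the object above the carrying platform, from
    the first pose (platform) and second pose (object), both expressed in
    the base coordinate system."""
    return object_pose[2] - platform_pose[2]

def target_placement(reference_pose, height):
    """Raise the reference placement position F1 by H to obtain the target
    placement position F2."""
    x, y, z = reference_pose
    return (x, y, z + height)

# Example: platform top at z = 0.40 m, object top at z = 0.55 m.
H = placement_height((0.0, 0.0, 0.40), (0.0, 0.0, 0.55))
F2 = target_placement((1.2, 0.8, 0.40), H)
```

Releasing the object at F2 rather than at F1 means it only ever falls its own height H, which is the damage-avoidance argument made in the text.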
S511, controlling the transferring device to transfer the object to be transferred to the target placement position according to the moving speed.
After transferring the object to be transported to the target placement position, the transfer device may directly release the object so that it falls freely to the reference placement position F1, or may lower it slowly and place it at the reference placement position F1.
It can be appreciated that determining the target placement position prevents the object to be transported from dropping to the reference placement position from too great a height during transfer and being damaged. In addition, the object to be transported can be accurately transferred to the reference placement position, which improves the transfer efficiency.
In the above method, the attribute information of the object to be transported can be accurately identified by the recognition model, so that the moving speed corresponding to the object during transfer can be accurately determined, improving the transfer efficiency. In addition, by determining the placement height of the object to be transported and, from it, the target placement position, damage to the object during transfer is avoided, further improving the transfer efficiency.
Referring to fig. 12, in addition to the transfer method, an embodiment of the present disclosure provides a transfer apparatus 120, the transfer apparatus 120 including:
an acquisition module 121, configured to acquire a target image, where the target image includes an object to be transported;
the identifying module 122 is configured to identify a target image, and obtain attribute information of an object to be transported;
a determining module 123, configured to determine a movement speed of the transfer device according to attribute information, where the attribute information and the movement speed have a preset correspondence;
and the control module 124 is used for controlling the transferring device to transfer the object to be transferred according to the moving speed.
In an alternative embodiment, identifying the target image includes identifying the target image by an identification model; the identification model comprises: the device comprises a trunk feature extraction module, a reinforcement feature extraction module and an attribute determination module; the attribute information includes basic attributes, and the identification module 122 is specifically configured to perform feature extraction processing on the target image through the trunk feature extraction module to obtain a first feature image; performing feature enhancement processing on the first feature image through an enhancement feature extraction module to obtain a second feature image; performing attribute analysis processing on the second characteristic image through an attribute determining module to obtain basic attributes of the object to be transported; the basic attributes comprise categories, and the categories and the moving speed have preset corresponding relations.
In an alternative embodiment, the attribute information further includes a mask object of the object to be transported in the mask image corresponding to the target image, the identification model further includes a mask determining module, and the identifying module 122 is further configured to: perform mask analysis processing on the second feature image through the mask determining module to obtain a mask image; and determine the mask object of the object to be transported according to the mask image.
In an alternative embodiment, one second feature image is obtained in the feature enhancement processing of the first feature image by the enhancement feature extraction module, and the mask determining module includes a first convolution unit and a first decoder; when performing mask analysis processing on the second feature image through the mask determination module to obtain a mask image, the recognition module 122 is specifically configured to: perform convolution processing on the second feature image through the first convolution unit to obtain a third feature image; and decode the third feature image through the first decoder to obtain the mask image.
In an alternative embodiment, a plurality of second feature images with different sizes are obtained in the feature enhancement processing of the first feature image by the enhancement feature extraction module; the mask determination module further includes a plurality of second convolution units with different convolution kernel sizes, a splicing unit and a second decoder; when performing mask analysis processing on the second feature images through the mask determining module to obtain a mask image, the identification module 122 is specifically configured to: perform convolution processing on the second feature images through the second convolution units in a one-to-one correspondence to obtain a plurality of fourth feature images with the same size; splice the plurality of fourth feature images with the same size through the splicing unit to obtain a fifth feature image; and decode the fifth feature image through the second decoder to obtain the mask image.
In an alternative embodiment, the determining module 123 is specifically configured to: determining the pixel area of a mask object in a mask image; and acquiring the moving speed of the transfer device according to the category and the pixel area, wherein the category, the pixel area and the moving speed have a preset corresponding relation.
In an alternative embodiment, the object to be transported comprises: express package; the categories of objects to be transported include: one of a box wrap, a column wrap, a sphere wrap, a pocket wrap, a sheet wrap, or a profile wrap.
In an alternative embodiment, the basic attributes further include: at least one of integrity, pose status, color features, and texture features.
In an alternative embodiment, before the transfer device is controlled to transfer the object to be transported according to the moving speed, the determining module 123 is further configured to: determine a placement height of the object to be transported; acquire a preset reference placement position; and determine the position located at the placement height above the reference placement position as a target placement position;
the control module 124 is specifically configured to: and controlling the transferring device to transfer the object to be transferred to the target placement position according to the moving speed.
In an alternative embodiment, the determining module 123 is specifically configured to, when determining the placement height of the object to be transported: acquiring a first pose of a bearing platform, wherein a transfer device is used for transferring an object to be transferred placed on the bearing platform to a target placement position; acquiring a second pose of an object to be transported when the object is on the bearing platform; and determining the placement height according to the first pose and the second pose.
According to the transfer method described above, objects to be transported with different attribute information can each be transferred by controlling the transfer device at the corresponding moving speed, which improves the transfer efficiency.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations appearing in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel; the sequence numbers merely distinguish the operations and do not themselves represent any order of execution. The flows may also include more or fewer operations, and those operations may likewise be performed sequentially or in parallel. It should be noted that the descriptions of "first", "second" and the like herein are used to distinguish different messages, devices, modules, etc.; they do not represent a sequence, nor do they require that the "first" and "second" items be of different types.
Fig. 13 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 13, the electronic device 130 includes: a processor 131, and a memory 132 communicatively coupled to the processor 131, the memory 132 storing computer-executable instructions.
The processor executes the computer-executable instructions stored in the memory to implement the transfer method provided in any of the above method embodiments; the specific functions and technical effects that can be achieved are not repeated here.
The embodiments of the present disclosure further provide a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions, when executed by a processor, are used to implement the transfer method provided in any of the above method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program, causing the electronic device to perform the transfer method provided by any of the method embodiments described above.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the partitioning of units is merely a logical functional partitioning, and there may be other partitioning in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling or direct coupling or communication connection shown or discussed between parts may be indirect coupling or communication connection through some interfaces, systems or units, and may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform part of the steps of the methods of the various embodiments of the disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the system is divided into different functional modules to perform all or part of the functions described above. The specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and will not be described herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A transfer method applied to a transfer device, comprising:
collecting a target image, wherein the target image comprises an object to be transported;
identifying the target image to obtain attribute information of the object to be transported;
determining the moving speed of the transfer device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation;
and controlling the transferring device to transfer the object to be transferred according to the moving speed.
2. The transfer method according to claim 1, wherein the transfer device is a suction cup device, and the controlling the transfer device to transfer the object to be transferred at the moving speed includes:
determining the air flow of the sucker device according to the attribute information, wherein the attribute information and the air flow have a preset corresponding relationship;
and controlling the sucker device to suck the object to be transported according to the air flow, and transporting the sucked object to be transported according to the moving speed.
3. The method of claim 1 or 2, wherein the identifying the target image comprises identifying the target image by an identification model; the identification model comprises: the device comprises a trunk feature extraction module, a reinforcement feature extraction module and an attribute determination module; the attribute information comprises basic attributes, the identifying the target image to obtain the attribute information of the object to be transported comprises the following steps:
performing feature extraction processing on the target image through the trunk feature extraction module to obtain a first feature image;
performing feature enhancement processing on the first feature image through the enhancement feature extraction module to obtain a second feature image;
and carrying out attribute analysis processing on the second characteristic image through the attribute determining module to obtain the basic attribute of the object to be transported, wherein the basic attribute comprises a category, and the category and the moving speed have a preset corresponding relation.
4. The transfer method according to claim 3, wherein the attribute information further comprises a mask object of the object to be transported in a mask image corresponding to the target image, the identification model further comprises a mask determining module, and after the first feature image is subjected to feature enhancement processing by the enhancement feature extraction module to obtain the second feature image, the method further comprises:
performing mask analysis processing on the second characteristic image through the mask determining module to obtain the mask image;
and determining the mask object of the object to be transported according to the mask image.
5. The transfer method according to claim 4, wherein one second feature image is obtained in the feature enhancement processing of the first feature image by the enhancement feature extraction module, and the mask determining module includes a first convolution unit and a first decoder; the performing mask analysis processing on the second feature image through the mask determining module to obtain the mask image includes:
performing convolution processing on the second feature image through the first convolution unit to obtain a third feature image;
and decoding the third characteristic image through the first decoder to obtain the mask image.
6. The transfer method according to claim 4, wherein a plurality of second feature images with different sizes are obtained in the feature enhancement processing of the first feature image by the enhancement feature extraction module; the mask determination module further includes a plurality of second convolution units, a splicing unit and a second decoder, the convolution kernels of the second convolution units being different in size; and the performing mask analysis processing on the second feature images through the mask determining module to obtain the mask image includes:
carrying out convolution processing on the second characteristic images through the second convolution units in a one-to-one correspondence manner to obtain a plurality of fourth characteristic images with the same size;
the fourth characteristic images with the same size are spliced through the splicing unit, and a fifth characteristic image is obtained;
and decoding the fifth characteristic image through the second decoder to obtain the mask image.
7. The method according to claim 4, wherein determining the moving speed of the transferring device according to the attribute information includes:
determining the pixel area of the mask object in the mask image;
and acquiring the moving speed of the transfer device according to the category and the pixel area, wherein the category, the pixel area and the moving speed have a preset corresponding relation.
8. The transfer method according to claim 3, wherein the object to be transported comprises: an express package; the categories of the objects to be transported comprise: one of a box wrap, a column wrap, a sphere wrap, a pocket wrap, a sheet wrap, or a profile wrap.
9. The method of claim 2, wherein the attribute information includes a base attribute including a category, wherein the sheet package corresponds to an air flow rate that is lower than the box package corresponds to an air flow rate.
10. The transfer method according to claim 3, wherein the basic attributes further comprise: at least one of integrity, pose status, color features, and texture features.
11. The transfer method according to claim 1 or 2, further comprising, before the transfer device is controlled to transfer the object to be transported at the moving speed:
determining the placement height of the object to be transported;
acquiring a preset reference placement position;
determining the position located at the placement height above the reference placement position as a target placement position;
the control of the transfer device to transfer the object to be transferred according to the moving speed comprises the following steps:
and controlling the transferring device to transfer the object to be transferred to the target placing position according to the moving speed.
12. The method of claim 11, wherein the determining the placement height of the object to be transported comprises:
acquiring a first pose of a bearing platform, wherein the transferring device is used for transferring the object to be transferred placed on the bearing platform to the target placing position;
acquiring a second pose of an object to be transported when the object is on the bearing platform;
and determining the placement height according to the first pose and the second pose.
13. A transfer apparatus, comprising:
the system comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring a target image, and the target image comprises an object to be transported;
the identification module is used for identifying the target image and obtaining attribute information of the object to be transported;
The determining module is used for determining the moving speed of the transferring device according to the attribute information, wherein the attribute information and the moving speed have a preset corresponding relation;
and the control module is used for controlling the transfer device to transfer the object to be transferred according to the moving speed.
14. An electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the transfer method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the transfer method of any one of claims 1 to 12.
CN202310533520.8A 2023-05-11 2023-05-11 Transfer method, transfer device and electronic device Active CN116563835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310533520.8A CN116563835B (en) 2023-05-11 2023-05-11 Transfer method, transfer device and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310533520.8A CN116563835B (en) 2023-05-11 2023-05-11 Transfer method, transfer device and electronic device

Publications (2)

Publication Number Publication Date
CN116563835A true CN116563835A (en) 2023-08-08
CN116563835B CN116563835B (en) 2024-01-26

Family

ID=87501385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310533520.8A Active CN116563835B (en) 2023-05-11 2023-05-11 Transfer method, transfer device and electronic device

Country Status (1)

Country Link
CN (1) CN116563835B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108292141A (en) * 2016-03-01 2018-07-17 深圳市大疆创新科技有限公司 Method and system for target following
CN109513629A (en) * 2018-11-14 2019-03-26 深圳蓝胖子机器人有限公司 Packages method, apparatus and computer readable storage medium
WO2021035062A1 (en) * 2019-08-21 2021-02-25 Mujin, Inc. Robotic multi-gripper assemblies and methods for gripping and holding objects
CN113256656A (en) * 2021-05-28 2021-08-13 北京达佳互联信息技术有限公司 Image segmentation method and device
CN114897816A (en) * 2022-05-09 2022-08-12 安徽工业大学 Mask R-CNN mineral particle identification and particle size detection method based on improved Mask
CN115345556A (en) * 2022-08-19 2022-11-15 深圳市研为科技有限公司 Product sorting method and system
CN115965935A (en) * 2022-12-26 2023-04-14 广州沃芽科技有限公司 Object detection method, device, electronic apparatus, storage medium, and program product

Also Published As

Publication number Publication date
CN116563835B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
US20200005485A1 (en) Three-dimensional bounding box from two-dimensional image and point cloud data
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
CN109086736A (en) Target Acquisition method, equipment and computer readable storage medium
CN112802105A (en) Object grabbing method and device
US11928594B2 (en) Systems and methods for creating training data
CN107597600A (en) Sorting system and method for sorting
JP2019217608A (en) Information processing device, information processing method and program
CN113351522B (en) Article sorting method, device and system
US11676390B2 (en) Machine-learning model, methods and systems for removal of unwanted people from photographs
CN109513629B (en) Method, device and computer readable storage medium for sorting packages
JP2019181687A (en) Information processing device, information processing method and program
CN110395515B (en) Cargo identification and grabbing method and equipment and storage medium
CN111401215B (en) Multi-class target detection method and system
CN116563835B (en) Transfer method, transfer device and electronic device
CN114820781A (en) Intelligent carrying method, device and system based on machine vision and storage medium
CN111814754A (en) Single-frame image pedestrian detection method and device for night scene
US10860826B2 (en) Information processing apparatus, control method, and program
CN112036400A (en) Method for constructing network for target detection and target detection method and system
CN117037126A (en) Transfer method, transfer device and electronic device
JP6674515B2 (en) Judgment device and judgment method
CN111091550A (en) Multi-size self-adaptive PCB solder paste area detection system and detection method
EP4173731A1 (en) Information processing device, sorting system, and program
CN109146885A (en) Image partition method, equipment and computer readable storage medium
CN112884755B (en) Method and device for detecting contraband
CN113627243B (en) Text recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant