CN110705931B - Cargo grabbing method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN110705931B
CN110705931B (application CN201910846303.8A)
Authority
CN
China
Prior art keywords
goods
grabbed
robot hand
determining
conveyor belt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910846303.8A
Other languages
Chinese (zh)
Other versions
CN110705931A (en)
Inventor
唐守殿
陈精华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zebra Laila Logistics Technology Co ltd
Original Assignee
Shanghai Zebra Laila Logistics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zebra Laila Logistics Technology Co ltd
Priority to CN201910846303.8A
Publication of CN110705931A
Application granted
Publication of CN110705931B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083Shipping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C3/00Sorting according to destination
    • B07C3/02Apparatus characterised by the means used for distribution
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C3/00Sorting according to destination
    • B07C3/02Apparatus characterised by the means used for distribution
    • B07C3/08Apparatus characterised by the means used for distribution using arrangements of conveyors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the application provides a cargo grabbing method, a cargo grabbing device, a cargo grabbing system, cargo grabbing equipment and a storage medium, wherein the cargo grabbing method comprises the following steps: collecting an original image through a camera; extracting edge information of the goods to be grabbed and the robot hand from the original image; determining the position information of the goods to be grabbed and the robot hand in the original image according to their edge information; determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt; and sending a grabbing instruction to the robot hand according to the moving state of the robot hand, so that the robot hand moves to the goods to be grabbed and grabs them. With the scheme in this application, the robot hand can grab goods quickly and accurately, so that logistics sorting efficiency is greatly improved.

Description

Cargo grabbing method, device, system, equipment and storage medium
Technical Field
The present application relates to information processing technologies, and in particular, to a cargo grabbing method, apparatus, system, device, and storage medium.
Background
At present, the logistics industry has entered a period of explosive growth: every logistics company must handle a large number of packages every day, and competition among logistics companies intensifies daily, so timeliness and accuracy are the goals pursued by express companies and their customers alike. Among these factors, sorting speed has become the most important bottleneck limiting the timeliness of every large logistics company.
However, existing logistics sorting systems still rely mainly on manual labor, and the automated sorting equipment in use is simple, so sorting efficiency is low. Some logistics sorting systems use machines to sort goods, for example a robot hand that grabs goods from a conveyor belt; however, because the goods on the conveyor belt differ in size, shape and arrangement, the grabbing success rate of the robot hand is low.
Disclosure of Invention
The embodiment of the application provides a cargo grabbing method, a cargo grabbing device, a cargo grabbing system, cargo grabbing equipment and a storage medium, so as to solve the above technical problems.
According to a first aspect of embodiments of the present application, there is provided a cargo grasping method including:
collecting an original image through a camera; the original image comprises goods to be grabbed and a robot hand, wherein the goods to be grabbed are placed on a conveyor belt, and the robot hand is positioned above the conveyor belt;
extracting edge information of the goods to be grabbed and the robot hand from the original image;
determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand;
determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt;
and sending a grabbing instruction to the robot hand according to the moving state of the robot hand, so that the robot hand moves to the goods to be grabbed and grabs the goods to be grabbed.
According to a second aspect of embodiments of the present application, there is provided a cargo gripping device comprising:
the image acquisition unit is used for acquiring an original image through a camera; the original image comprises goods to be grabbed and a robot hand, wherein the goods to be grabbed are placed on a conveyor belt, and the robot hand is positioned above the conveyor belt;
an edge extraction unit, configured to extract edge information of the goods to be grabbed and the robot hand from the original image;
the position determining unit is used for determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand;
the moving state determining unit is used for determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt;
and the instruction sending unit is used for sending a grabbing instruction to the robot hand according to the moving state of the robot hand, so that the robot hand moves to the goods to be grabbed and grabs the goods to be grabbed.
According to a third aspect of the embodiments of the present application, there is provided a cargo gripping system, including a camera, a processing device, a conveyor belt for conveying a cargo, and a robot hand for gripping the cargo, the robot hand being located above the conveyor belt; wherein:
the camera is used for collecting an original image; the original image comprises goods to be grabbed and the robot hand which are positioned on the conveyor belt;
the processing device is used for extracting the edge information of the goods to be grabbed and the robot hand from the original image; determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand; determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt; and sending a grabbing instruction to the robot hand according to the moving state of the robot hand;
and the robot hand is used for moving to the goods to be grabbed according to the grabbing instruction and grabbing the goods to be grabbed.
According to a fourth aspect of embodiments herein, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as described above.
According to a fifth aspect of embodiments of the present application, there is provided an electronic device, comprising a memory for storing one or more programs, and one or more processors; the one or more programs, when executed by the one or more processors, implement the methods as described above.
With the cargo grabbing scheme provided in the embodiments of the present application, an original image is collected through a camera, the original image containing the goods to be grabbed on the conveyor belt and the robot hand located above the conveyor belt; edge information of the goods to be grabbed and the robot hand is extracted from the original image to determine their position information; the moving state of the robot hand is determined according to this position information and the running speed of the conveyor belt; and a grabbing instruction is sent to the robot hand according to the moving state, so that the robot hand can grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a cargo grabbing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a cargo gripping device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cargo grabbing system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the process of implementing the present application, the inventors found that:
the existing logistics sorting system usually mainly adopts manpower, and the automatic sorting equipment is simpler, so that the sorting efficiency is lower; also some commodity circulation letter sorting system adopts the machine to realize the goods letter sorting, for example utilizes the manipulator to snatch the goods on the conveyer belt, but because the goods size on the conveyer belt is different, the shape is different, arranges also differently, leads to the grabbing success rate of manipulator to be lower.
In view of the above problems, the embodiment of the present application provides a cargo grabbing scheme, an original image is collected through a camera, the original image includes a cargo to be grabbed placed on a conveyor belt and a robot hand located above the conveyor belt, edge information of the cargo to be grabbed and the robot hand is extracted from the original image to determine position information of the cargo to be grabbed and the robot hand, a moving state of the robot hand is determined according to the position information of the cargo to be grabbed and the robot hand and a running speed of the conveyor belt, and a grabbing instruction is sent to the robot hand according to the moving state, so that the robot hand can fast and accurately grab the cargo to be grabbed, and logistics sorting efficiency is greatly improved.
The solution in the embodiments of the present application may be implemented in various computer languages, for example the object-oriented programming language Java or the scripting language JavaScript.
In order to make the technical solutions and advantages in the embodiments of the present application more clearly understood, the exemplary embodiments of the present application are described below in further detail with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. It should be noted that the embodiments in the present application, and the features of those embodiments, may be combined with each other without conflict.
Fig. 1 is a schematic flow chart of a cargo grabbing method according to an embodiment of the present disclosure. The method comprises the following steps:
and step 11, acquiring an original image through a camera, wherein the original image comprises goods to be grabbed placed on the conveyor belt and a robot hand positioned above the conveyor belt.
In the embodiment of the application, the goods are placed on the conveyor belt and transported by it, and the robot hand grabs the goods from the conveyor belt. Generally, the goods to be grabbed are the goods on the conveyor belt arranged at the forefront.
Step 12: edge information of the goods to be grabbed and the robot hand is extracted from the original image.
The original image contains, in addition to the goods to be grabbed and the robot hand, an image of the conveyor belt. So that the conveyor belt does not interfere with extracting the edge information of the goods to be grabbed and the robot hand, in this embodiment of the present application the original image first needs to be preprocessed to remove the color of the conveyor belt. Step 12 may therefore be implemented as follows:
First, graying processing is performed on the original image according to the color of the conveyor belt, yielding a grayscale image from which the color of the conveyor belt has been removed. Edge detection is then performed on the grayscale image to obtain the edge information of the goods to be grabbed and the robot hand; the resulting image is taken as the edge image.
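The preprocessing above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the belt gray level, the tolerance, and the gradient threshold are assumed values, and the simple gradient-magnitude edge step stands in for whatever edge detector (e.g. OpenCV's Canny) an implementation would actually use.

```python
import numpy as np

def preprocess(image_bgr, belt_gray=90, tolerance=15):
    """Gray the image, blank out belt-colored pixels, return a binary edge map."""
    # Standard luma weights for graying a BGR image.
    gray = (0.114 * image_bgr[..., 0]
            + 0.587 * image_bgr[..., 1]
            + 0.299 * image_bgr[..., 2])
    # Remove the conveyor belt: zero out pixels whose gray level is near the belt's.
    gray = np.where(np.abs(gray - belt_gray) <= tolerance, 0.0, gray)
    # Crude edge detection: gradient magnitude thresholded to a binary map.
    gy, gx = np.gradient(gray)
    edges = (np.hypot(gx, gy) > 20).astype(np.uint8)
    return edges
```

A box of goods brighter than the belt produces edge pixels along its outline, while the uniform belt region stays empty.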
Step 13: the position information of the goods to be grabbed and the robot hand in the original image is determined according to the edge information of the goods to be grabbed and the robot hand.
Step 13 includes two processes: determining the position information of the robot hand in the original image and determining the position information of the goods to be grabbed in the original image. These two processes are described separately below.
1. Determining the position information of the robot hand in the original image.
After the edge image containing the edge information of the goods to be grabbed and the robot hand is obtained, because the shape of the robot hand is known, image recognition can be performed directly on the edge image with an image recognition algorithm to obtain the position information of the robot hand.
In this embodiment, the position information of the robot hand may be the bottom center coordinates of the robot hand. The bottom center coordinates are chosen because, when the vertical distance between the robot hand and the goods to be grabbed is subsequently determined, the distance computed from the bottom center coordinates is the most accurate.
It should be noted that the image recognition algorithm is not limited in the embodiment of the present application, as long as it can recognize the edge information of the robot hand in the edge image; OpenCV, for example, may be used.
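As an illustration of the bottom-center position described above: once the hand's pixels have been recognized (by whatever algorithm), the coordinate can be read off a binary mask. The `bottom_center` helper and the mask input are assumptions for this sketch, not part of the patent.

```python
import numpy as np

def bottom_center(hand_mask):
    """Return (x, y) of the bottom center of a binary robot-hand mask."""
    ys, xs = np.nonzero(hand_mask)
    bottom_y = ys.max()              # lowest image row the hand occupies
    row_xs = xs[ys == bottom_y]      # columns occupied on that row
    return (int(row_xs.mean()), int(bottom_y))
```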
2. Determining the position information of the goods to be grabbed in the original image.
Because the goods on the conveyor belt differ in size, shape and arrangement, they are not well suited to recognition by an image recognition algorithm. In the embodiment of the application, the goods arranged at the forefront of the conveyor belt are instead recognized by scanning, and their position is then determined.
Specifically, when the conveyor belt is in a non-vertical state in the original image, the edge image is first scanned column by column from the conveying direction of the conveyor belt toward the direction opposite to conveying; the first non-empty point scanned is determined as the first vertex of the goods to be grabbed, and the coordinates of the first vertex are determined;
for example, when the conveyor belt conveys from left to right, the conveying direction is rightward and the opposite direction is leftward; that is, the edge image is scanned column by column from right to left, and the first non-empty point scanned is the rightmost point of the edge line of the goods to be grabbed.
Then, horizontal scanning is continued from the first vertex in the direction opposite to conveying; the first non-empty point scanned is determined as the second vertex of the goods to be grabbed, and the coordinates of the second vertex are determined;
still taking a conveyor belt conveying from left to right as an example, after the first vertex is determined, horizontal scanning continues leftward, and the non-empty point scanned at this time is the leftmost point horizontally corresponding to the rightmost point of the edge line of the goods to be grabbed.
And finally, determining the coordinates of the central point of the goods to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
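The column-by-column scan for a left-to-right belt can be sketched as follows, assuming a binary edge image; `locate_goods` is a hypothetical helper name, and taking the midpoint of the two vertices as the center point follows the description above.

```python
import numpy as np

def locate_goods(edge_image):
    """Scan column by column from right to left for the frontmost goods."""
    h, w = edge_image.shape
    # First vertex: first non-empty point found scanning right to left.
    first = None
    for x in range(w - 1, -1, -1):
        col = np.nonzero(edge_image[:, x])[0]
        if col.size:
            first = (x, int(col[0]))
            break
    if first is None:
        return None  # no goods in view
    # Second vertex: leftmost non-empty point in the first vertex's row,
    # i.e. the horizontal scan continued opposite to the conveying direction.
    row = np.nonzero(edge_image[first[1], :])[0]
    second = (int(row[0]), first[1])
    # Center point of the goods to be grabbed.
    return ((first[0] + second[0]) / 2, (first[1] + second[1]) / 2)
```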
When the conveyor belt is in a vertical state in the original image, determining the position information of the goods to be grabbed in step 13 may specifically include:
first, the edge image is scanned row by row from the conveying direction of the conveyor belt toward the direction opposite to conveying; the first non-empty point scanned is determined as the first vertex of the goods to be grabbed, and the coordinates of the first vertex are determined;
for example, when the conveyor belt conveys from top to bottom, the conveying direction is downward and the opposite direction is upward; that is, the edge image is scanned row by row from bottom to top, and the first non-empty point scanned is the lowermost point of the edge line of the goods to be grabbed.
Then, vertical scanning is continued from the first vertex in the direction opposite to conveying; the first non-empty point scanned is determined as the second vertex of the goods to be grabbed, and the coordinates of the second vertex are determined;
still taking a conveyor belt conveying from top to bottom as an example, after the first vertex is determined, vertical scanning continues upward from the first vertex, and the non-empty point scanned at this time is the uppermost point vertically corresponding to the lowermost point of the edge line of the goods to be grabbed.
And finally, determining the coordinates of the center point of the goods to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
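The row-by-row scan for a top-to-bottom belt can be sketched in the same way, assuming a binary edge image; the helper name and the midpoint center are illustrative choices, not the patent's wording.

```python
import numpy as np

def locate_goods_vertical(edge_image):
    """Scan row by row from bottom to top for the frontmost goods."""
    h, w = edge_image.shape
    # First vertex: first non-empty point found scanning bottom to top.
    first = None
    for y in range(h - 1, -1, -1):
        row = np.nonzero(edge_image[y, :])[0]
        if row.size:
            first = (int(row[0]), y)  # a lowermost edge point
            break
    if first is None:
        return None  # no goods in view
    # Second vertex: uppermost non-empty point in the first vertex's column.
    col = np.nonzero(edge_image[:, first[0]])[0]
    second = (first[0], int(col[0]))
    # Center point of the goods to be grabbed.
    return ((first[0] + second[0]) / 2, (first[1] + second[1]) / 2)
```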
The placement of the conveyor belt in the original image is not limited in the embodiments of the present application.
Step 14: the moving state of the robot hand is determined according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt.
In the embodiment of the application, after the position information of the goods to be grabbed and the robot hand in the original image is determined, the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand can be determined.
Taking the position information of the goods to be grabbed as the center point coordinates (x1, y1) and the position information of the robot hand as the bottom center coordinates (x2, y2), for example, the horizontal distance between them is |x2 − x1| and the vertical distance is |y2 − y1|.
After the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand are determined, the moving state of the robot hand can be determined by combining the running speed of the conveyor belt.
Specifically, the movement state of the robot hand may include a horizontal movement state and a vertical movement state.
The horizontal movement state of the robot hand may be determined according to the following formula:
x = a₁t² + 2vt

wherein v is the running speed of the conveyor belt; x is the horizontal distance between the goods to be grabbed and the robot hand; a₁ is the horizontal acceleration of the robot hand; and t is the acceleration moving time and the deceleration moving time of the robot hand.
The vertical movement state of the robot hand can be determined according to the following formula:
y = a₂t²

wherein y is the vertical distance between the goods to be grabbed and the robot hand, and a₂ is the vertical acceleration of the robot hand.
According to the above two formulas, whether in the horizontal direction or the vertical direction, the robot hand in this embodiment accelerates from zero speed and then decelerates back to zero speed, directly reaching and grabbing the goods to be grabbed. The acceleration magnitudes during the acceleration phase and the deceleration phase may be the same or different, and the acceleration time and the deceleration time may be the same or different.
In the embodiment of the application, the acceleration magnitudes of the robot hand during the acceleration movement and the deceleration movement are defined to be the same, and the acceleration moving time and the deceleration moving time are also defined to be the same. Thus the acceleration of the robot hand in the horizontal acceleration phase is a₁ and in the horizontal deceleration phase is −a₁; the acceleration in the vertical acceleration phase is a₂ and in the vertical deceleration phase is −a₂; and the total time of the robot hand's movement is 2t.
It should be noted that, of the three parameters a₁, a₂ and t involved in the above two formulas, once any one of them is known, the other two can be determined. Which parameter is taken as the known one is not limited in the embodiments of the present application.
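As a numeric illustration of how knowing one parameter fixes the other two, the sketch below assumes the horizontal relation x = a₁t² + 2vt (the belt carrying the goods toward the hand during the move; the sign of the belt term depends on the actual geometry) together with y = a₂t², and takes a₁ as the known parameter. The function name and the choice of known parameter are assumptions for illustration.

```python
import math

def plan_motion(v, x, y, a1):
    """Return (t, a2): half the total move time and the vertical acceleration.

    Solves a1*t**2 + 2*v*t - x = 0 for the positive root, then a2 = y / t**2.
    """
    t = (-v + math.sqrt(v * v + a1 * x)) / a1
    a2 = y / (t * t)
    return t, a2
```

Plugging the results back into both relations recovers the horizontal gap x and the vertical gap y, which is the consistency the patent relies on.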
Step 15: a grabbing instruction is sent to the robot hand according to the moving state of the robot hand, so that the robot hand moves to the goods to be grabbed and grabs them.
With the cargo grabbing scheme provided in the embodiments of the present application, an original image is collected through a camera, the original image containing the goods to be grabbed on the conveyor belt and the robot hand located above the conveyor belt; edge information of the goods to be grabbed and the robot hand is extracted from the original image to determine their position information; the moving state of the robot hand is determined according to this position information and the running speed of the conveyor belt; and a grabbing instruction is sent to the robot hand according to the moving state, so that the robot hand can grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
Based on the same inventive concept, an embodiment of the present application further provides a cargo grabbing device. Since the principle by which the device solves the problem is similar to that of the cargo grabbing method, repeated parts are not described again.
Fig. 2 shows a schematic structural diagram of a cargo grabbing device provided by the embodiment of the application.
As shown, the cargo gripping apparatus comprises:
an image acquisition unit 21 for acquiring an original image by a camera; the original image comprises goods to be grabbed and a robot arm, wherein the goods to be grabbed are placed on a conveyor belt, and the robot arm is positioned above the conveyor belt;
an edge extracting unit 22 configured to extract edge information of the goods to be grabbed and the robot hand from the original image;
the position determining unit 23 is configured to determine position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand;
a moving state determining unit 24, configured to determine a moving state of the robot according to the position information of the goods to be grabbed and the robot in the original image, and the running speed of the conveyor belt;
and the instruction sending unit 25 is configured to send a grabbing instruction to the robot arm according to the moving state of the robot arm, so that the robot arm moves to the goods to be grabbed and grabs the goods to be grabbed.
Optionally, the edge extracting unit 22 includes:
the graying processing module is used for performing graying processing on the original image to obtain a grayscale image;
and the edge detection module is used for carrying out edge detection on the gray level image to obtain an edge image, and the edge image comprises the goods to be grabbed and the edge information of the robot hand.
Further, the position determination unit 23 includes:
the first scanning module is used for scanning the edge images row by row from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt when the conveyor belt is in a non-vertical state in the original image, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinate of the first vertex; horizontally scanning along the first vertex in the opposite direction of conveying the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex;
the second scanning module is used for scanning the edge image line by line from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt when the conveyor belt is in a vertical state in the original image, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinate of the first vertex; vertically scanning along the first vertex in the opposite direction of conveying to the conveying belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex;
and the cargo coordinate determination module is used for determining the coordinates of the central point of the cargo to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
Further, the position determining unit 23 further includes:
and the robot hand coordinate determination module is used for carrying out image recognition on the edge image by utilizing an image recognition algorithm and determining the bottom center coordinate of the robot hand.
Optionally, the moving state determining unit 24 includes:
the distance determining module is used for determining the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand according to the position information of the goods to be grabbed and the position information of the robot hand in the original image;
and the moving state determining module is used for determining the moving state of the robot hand according to the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand and the running speed of the conveyor belt.
Further, the moving state determining module is specifically configured to:
determine the horizontal moving state of the robot hand according to the horizontal distance between the goods to be grabbed and the robot hand, using the following formula:
x = a₁t² + 2vt

wherein v is the running speed of the conveyor belt, x is the horizontal distance between the goods to be grabbed and the robot hand, a₁ is the horizontal acceleration of the robot hand, and t is the acceleration moving time and the deceleration moving time of the robot hand;
and determine the vertical moving state of the robot hand according to the vertical distance between the goods to be grabbed and the robot hand, using the following formula:
y = a₂t²

wherein y is the vertical distance between the goods to be grabbed and the robot hand, and a₂ is the vertical acceleration of the robot hand.
With the cargo grabbing device provided in the embodiment of the present application, the robot hand can grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
Based on the same inventive concept, an embodiment of the present application further provides a cargo grabbing system. Since the principle by which each device of the system solves the problem is similar to that of the cargo grabbing method, repeated parts are not described again.
Fig. 3 shows a schematic structural diagram of a cargo gripping system according to an embodiment of the present application.
As shown, the cargo gripping system comprises: the device comprises a camera 31, a processing device 32, a conveyor belt 34 for conveying goods and a robot arm 33 for grabbing the goods, wherein the robot arm 33 is positioned above the conveyor belt 34; wherein:
the camera 31 is used for collecting an original image; the original image contains the goods to be grabbed on the conveyor belt 34 and the robot arm 33;
the processing device 32 is configured to extract edge information of the goods to be grabbed and the robot hand 33 from the original image; determine the position information of the goods to be grabbed and the robot hand 33 in the original image according to the edge information of the goods to be grabbed and the robot hand 33; determine the moving state of the robot hand 33 according to the position information of the goods to be grabbed and the robot hand 33 in the original image and the running speed of the conveyor belt 34; and send a grabbing instruction to the robot hand 33 according to the moving state of the robot hand 33;
and the robot hand 33 is configured to move to the goods to be grabbed according to the grabbing instruction and grab the goods to be grabbed.
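As a rough illustration of the graying and edge-detection steps performed by the processing device 32, here is a minimal pure-Python sketch; a real implementation would typically use an image library (e.g. Canny edge detection), and the gradient threshold and function names here are assumptions:

```python
def to_gray(rgb):
    # Luminance-weighted graying of an RGB pixel grid
    # (standard ITU-R BT.601 weights).
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def edge_map(gray, thresh=40):
    # Crude gradient-magnitude edge detector: marks a pixel as edge
    # when the sum of absolute central differences exceeds `thresh`.
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = gray[i][j + 1] - gray[i][j - 1]
            gy = gray[i + 1][j] - gray[i - 1][j]
            if abs(gx) + abs(gy) >= thresh:
                edges[i][j] = 1
    return edges
```

The resulting edge image is the input to the vertex-scanning and position-determination steps described above.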
With the cargo grabbing system provided in the embodiments of the present application, the robot hand can grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
Based on the same inventive concept, embodiments of the present application further provide a computer storage medium, which is described below.
The computer storage medium has a computer program stored thereon which, when executed by a processor, implements the steps of the method according to the embodiments.
The computer storage medium provided in the embodiments of the present application enables the robot hand to grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
Based on the same inventive concept, the embodiment of the present application further provides an electronic device, which is described below.
Fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
As shown, the electronic device includes a memory 401 for storing one or more programs, and one or more processors 402; the one or more programs, when executed by the one or more processors, implement the method according to the first embodiment.
The electronic device provided in the embodiments of the present application enables the robot hand to grab the goods to be grabbed quickly and accurately, thereby greatly improving logistics sorting efficiency.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. A cargo grabbing method, comprising:
collecting an original image through a camera; the original image comprises goods to be grabbed and a robot hand, wherein the goods to be grabbed are placed on a conveyor belt, and the robot hand is positioned above the conveyor belt;
extracting edge information of the goods to be grabbed and the robot hand from the original image;
determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand;
determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt;
according to the moving state of the robot hand, a grabbing instruction is sent to the robot hand, so that the robot hand moves to the goods to be grabbed and grabs the goods to be grabbed;
extracting edge information of the goods to be grabbed and the robot hand from the original image, including:
carrying out graying processing on the original image to obtain a grayscale image;
performing edge detection on the grayscale image to obtain an edge image, wherein the edge image comprises edge information of the goods to be grabbed and the robot hand;
determining the position information of the goods to be grabbed in the original image according to the edge information of the goods to be grabbed and the robot hand, wherein the determining comprises the following steps:
when the conveyor belt is in a non-vertical state in the original image, scanning the edge image column by column from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; horizontally scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex;
when the conveyor belt is in a vertical state in the original image, scanning the edge image line by line from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; vertically scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex;
and determining the coordinates of the center point of the goods to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
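The scanning procedure above can be sketched roughly as follows, assuming a horizontal belt conveying left-to-right in image coordinates; the coordinate conventions and the treatment of the second vertex (taken as the farthest edge pixel on the scan row) are simplifying assumptions, not the patent's exact definition:

```python
def find_vertices(edges):
    """Column-by-column scan from the downstream (conveying-direction)
    side for the first edge pixel, then a horizontal scan from that
    vertex toward the upstream side for the opposite edge; the center
    point is the midpoint of the two vertices."""
    h, w = len(edges), len(edges[0])
    first = None
    for j in range(w - 1, -1, -1):      # columns, downstream -> upstream
        for i in range(h):
            if edges[i][j]:
                first = (i, j)
                break
        if first:
            break
    if first is None:
        return None                      # no goods visible in the edge image
    i0, j0 = first
    second = first
    for j in range(j0, -1, -1):          # horizontal scan along the row
        if edges[i0][j]:
            second = (i0, j)             # keep the farthest edge pixel
    center = ((first[0] + second[0]) / 2, (first[1] + second[1]) / 2)
    return first, second, center
```

For a vertical belt, the same logic applies with rows and columns exchanged, per the line-by-line branch of the claim.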
2. The method of claim 1, wherein determining the position information of the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand comprises:
and carrying out image recognition on the edge image by using an image recognition algorithm, and determining the bottom center coordinate of the robot hand.
3. The method of claim 1, wherein determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt comprises:
determining the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image;
and determining the moving state of the robot hand according to the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand and the running speed of the conveyor belt.
4. The method of claim 3, wherein determining the moving state of the robot hand according to the horizontal distance and the vertical distance between the goods to be grabbed and the robot hand and the running speed of the conveyor belt comprises:
according to the horizontal distance between the goods to be grabbed and the robot hand, determining the horizontal moving state of the robot hand according to the following formula:
[Equation image FDA0003612434180000021 (horizontal moving state formula) not reproduced in this extraction]
wherein v is the running speed of the conveyor belt, x is the horizontal distance between the goods to be grabbed and the robot hand, a1 is the acceleration of the robot hand, and t is the acceleration moving time and the deceleration moving time of the robot hand;
according to the vertical distance between the goods to be grabbed and the robot hand, determining the vertical movement state of the robot hand according to the following formula:
[Equation image FDA0003612434180000031 (vertical moving state formula) not reproduced in this extraction]
wherein y is the vertical distance between the goods to be grabbed and the robot hand, and a2 is the acceleration of the robot hand.
5. A cargo grabbing device, comprising:
the image acquisition unit is used for acquiring an original image through a camera; the original image comprises goods to be grabbed and a robot hand, wherein the goods to be grabbed are placed on a conveyor belt, and the robot hand is positioned above the conveyor belt;
an edge extraction unit, configured to extract edge information of the goods to be grabbed and the robot hand from the original image;
the position determining unit is used for determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand;
the moving state determining unit is used for determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt;
the instruction sending unit is used for sending a grabbing instruction to the robot hand according to the moving state of the robot hand, so that the robot hand moves to the goods to be grabbed and grabs the goods to be grabbed;
the edge extraction unit is specifically configured to:
carrying out graying processing on the original image to obtain a grayscale image; performing edge detection on the grayscale image to obtain an edge image, wherein the edge image comprises edge information of the goods to be grabbed and the robot hand;
the position determining unit is specifically configured to:
when the conveyor belt is in a non-vertical state in the original image, scanning the edge image column by column from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; horizontally scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex; when the conveyor belt is in a vertical state in the original image, scanning the edge image line by line from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; vertically scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex; and determining the coordinates of the center point of the goods to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
6. A cargo grabbing system, comprising a camera, a processing device, a conveyor belt for conveying goods, and a robot hand for grabbing the goods, wherein the robot hand is positioned above the conveyor belt; wherein:
the camera is used for collecting an original image; the original image comprises goods to be grabbed and the robot hand which are positioned on the conveyor belt;
the processing device is used for extracting the edge information of the goods to be grabbed and the robot hand from the original image; determining the position information of the goods to be grabbed and the robot hand in the original image according to the edge information of the goods to be grabbed and the robot hand; determining the moving state of the robot hand according to the position information of the goods to be grabbed and the robot hand in the original image and the running speed of the conveyor belt; and sending a grabbing instruction to the robot hand according to the moving state of the robot hand;
the robot hand is used for moving to the goods to be grabbed according to the grabbing instruction and grabbing the goods to be grabbed;
the processing device is specifically configured to:
carrying out graying processing on the original image to obtain a grayscale image; performing edge detection on the grayscale image to obtain an edge image, wherein the edge image comprises edge information of the goods to be grabbed and the robot hand;
when the conveyor belt is in a non-vertical state in the original image, scanning the edge image column by column from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; horizontally scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex; when the conveyor belt is in a vertical state in the original image, scanning the edge image line by line from the conveying direction of the conveyor belt to the opposite conveying direction of the conveyor belt, determining a first scanned non-empty point as a first vertex of the goods to be grabbed, and determining the coordinates of the first vertex; vertically scanning along the first vertex in the direction opposite to the conveying direction of the conveyor belt, determining a first scanned non-empty point as a second vertex of the goods to be grabbed, and determining the coordinates of the second vertex; and determining the coordinates of the center point of the goods to be grabbed according to the coordinates of the first vertex and the coordinates of the second vertex.
7. A computer storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
8. An electronic device comprising one or more processors and memory, the memory configured to store one or more programs; the one or more programs, when executed by the one or more processors, implement the method of any of claims 1-4.
CN201910846303.8A 2019-09-09 2019-09-09 Cargo grabbing method, device, system, equipment and storage medium Active CN110705931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846303.8A CN110705931B (en) 2019-09-09 2019-09-09 Cargo grabbing method, device, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110705931A CN110705931A (en) 2020-01-17
CN110705931B true CN110705931B (en) 2022-11-15

Family

ID=69195141

Country Status (1)

Country Link
CN (1) CN110705931B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111086888B (en) * 2020-01-19 2021-04-06 浙江工业大学 Automatic stacking method and system of manipulator
CN113443404B (en) * 2020-03-26 2023-06-09 顺丰科技有限公司 Goods sorting method, goods sorting system and device
WO2022241597A1 (en) * 2021-05-17 2022-11-24 海南师范大学 Ai intelligent garbage identification and classification system and method

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6471044B1 (en) * 1999-04-30 2002-10-29 Siemens Electrocom, L.P. Hold and release singulator
CN105728328A (en) * 2016-05-13 2016-07-06 杭州亚美利嘉科技有限公司 Goods sorting system and method
CN106965180A (en) * 2017-04-13 2017-07-21 北京理工大学 The mechanical arm grabbing device and method of bottle on streamline
CN108712946B (en) * 2017-08-23 2021-11-09 深圳蓝胖子机器智能有限公司 Goods placement method, device and system, electronic equipment and readable storage medium
CN109719042B (en) * 2017-10-27 2022-04-12 北京京东振世信息技术有限公司 Package supply method and device, electronic equipment and storage medium
CN111344073B (en) * 2018-05-16 2022-01-25 深圳蓝胖子机器人有限公司 Buffer memory device, goods sorting device and goods sorting system

Also Published As

Publication number Publication date
CN110705931A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110705931B (en) Cargo grabbing method, device, system, equipment and storage medium
CN107597600A (en) Sorting system and method for sorting
CN109092696B (en) Sorting system and sorting method
EP2915596B1 (en) Delivery sorting processing system
CN113351522B (en) Article sorting method, device and system
CN113420746B (en) Robot visual sorting method and device, electronic equipment and storage medium
CN112318485B (en) Object sorting system and image processing method and device thereof
CN109584298B (en) Robot-oriented autonomous object picking task online self-learning method
CN112802105A (en) Object grabbing method and device
CN110711718A (en) Express package intelligent sorting system and method based on X-ray image and storage medium
CN111597857B (en) Logistics package detection method, device, equipment and readable storage medium
CN110633738B (en) Rapid classification method for industrial part images
CN114694064B (en) Graph cutting method and system based on target recognition
CN114241037A (en) Mixed size unloading disc
CN114820781A (en) Intelligent carrying method, device and system based on machine vision and storage medium
CN113610833A (en) Material grabbing method and device, electronic equipment and storage medium
CN114842323B (en) Intelligent robot sorting optimization method based on classification recognition
JP6643921B2 (en) Sorting device and article removal method
CN115393696A (en) Object bin picking with rotation compensation
CN114435828A (en) Goods storage method and device, carrying equipment and storage medium
CN113554706B (en) Trolley parcel position detection method based on deep learning
WO2024067006A1 (en) Disordered wire sorting method, apparatus, and system
CN111079575B (en) Material identification method and system based on package image characteristics
CN112338898A (en) Image processing method and device of object sorting system and object sorting system
CN114332220A (en) Automatic parcel sorting method based on three-dimensional vision and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210317

Address after: 200333 room 3110, No. 100, Lane 130, Taopu Road, Putuo District, Shanghai

Applicant after: Shanghai zebra Laila Logistics Technology Co.,Ltd.

Address before: Room 308-1, area C, 1718 Daduhe Road, Putuo District, Shanghai 200333

Applicant before: Shanghai kjing XinDa science and Technology Group Co.,Ltd.

GR01 Patent grant