WO2022156593A1 - Target object detection method and apparatus, electronic device, storage medium and program - Google Patents
- Publication number: WO2022156593A1 (PCT/CN2022/071867)
- Authority: WO (WIPO/PCT)
- Prior art keywords: target, turnover box, area, target object, determined
- Prior art date: 2021-01-20
Classifications
- B07C5/00, B07C5/36 — Sorting according to a characteristic or feature of the articles or material being sorted; sorting apparatus characterised by the means used for distribution
- G06T7/70, G06T7/73, G06T7/74 — Image analysis; determining position or orientation of objects or cameras using feature-based methods, involving reference images or patches
- G06T7/20, G06T7/246, G06T7/248 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
- G06T7/60 — Analysis of geometric attributes
- G06V20/50, G06V20/52 — Context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/70 — Labelling scene content, e.g. deriving syntactic or semantic representations
- B07C2501/0063 — Using robots
- G06V2201/06 — Recognition of objects for industrial automation
Definitions
- the present application relates to target detection technology, and relates to, but is not limited to, a target object detection method, apparatus, electronic device, computer storage medium, and computer program.
- in the related art, a detection method based on a photoelectric sensor, a vision-based target detection method, or a detection method based on a motion model can be used to estimate the drop point of an object.
- the embodiments of the present application are expected to provide a target object detection method, apparatus, electronic device, computer storage medium, and computer program.
- the embodiment of the present application provides a target detection method, the method includes:
- the target object is tracked to obtain the real-time position information of the target object
- the area where the target falls is determined according to the current position information of the target and the regional position information.
- the area location information includes areas where various types of turnover boxes are located.
- the method further includes:
- the type of each turnover box is determined according to the attributes of each of a plurality of turnover boxes; type labeling information of each turnover box is obtained according to its type; and the area location information is obtained according to the type labeling information of each turnover box.
- the attributes of each turnover box include at least one of the following: color, texture, shape, and size.
- the target tracking of the target includes:
- the target object is photographed multiple times by using the image acquisition device to obtain multiple photographed images; according to each photographed image, target tracking is performed on the target object.
- performing target tracking on the target object according to each captured image includes: eliminating the background in each captured image to obtain a background-eliminated result; and performing target tracking on the target object according to the background-eliminated result of each captured image, where the background represents an image of a preset background object.
- the method further includes:
- after the area where the target object's drop point is located is determined, the position of the target object's drop point is determined according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- the method further includes:
- the initial position of the target is determined according to the pose information of the end of the robot arm; the height of the target is determined according to the initial position of the target.
- the target tracking of the target includes:
- target tracking is performed on the target object according to the initial position of the target object, to obtain real-time position information of the target object.
- the method further includes:
- when the area where the target object's drop point is located is the destination turnover box, it is determined that the target object has been picked successfully;
- when the area where the target object's drop point is located is the source turnover box or another area, it is determined that picking of the target object has failed; the other area is an area other than the destination turnover box and the source turnover box.
- the embodiment of the present application also provides a target object detection device, the device includes:
- the first processing module is configured to, when it is determined that the robotic arm picks up the target object from any turnover box and the height of the target object is greater than the height of that turnover box, perform target tracking on the target object to obtain real-time position information of the target object;
- the second processing module is configured to, when it is determined according to the real-time position information of the target object that the target object falls from the robotic arm, determine the area where the target object's drop point is located according to the position information of the target object at the current moment and area location information;
- the area location information includes the areas where various types of turnover boxes are located.
- the second processing module is further configured to:
- the type of each turnover box is determined according to the attributes of each of a plurality of turnover boxes; type labeling information of each turnover box is obtained according to its type; and the area location information is obtained according to the type labeling information of each turnover box.
- the attributes of each turnover box include at least one of the following: color, texture, shape, and size.
- the first processing module is configured to perform target tracking on the target object by:
- photographing the target object multiple times with the image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
- the first processing module is configured to perform target tracking on the target object according to each captured image by: eliminating the background in each captured image to obtain a background-eliminated result; and performing target tracking on the target object according to the background-eliminated result of each captured image, where the background represents an image of a preset background object.
- the second processing module is further configured to:
- after the area where the target object's drop point is located is determined, determine the position of the target object's drop point according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- the first processing module is further configured to:
- the initial position of the target is determined according to the pose information of the end of the robot arm; the height of the target is determined according to the initial position of the target.
- the first processing module is configured to perform target tracking on the target object by:
- performing target tracking on the target object according to the initial position of the target object to obtain real-time position information of the target object.
- the second processing module is further configured to:
- when the area where the target object's drop point is located is the destination turnover box, it is determined that the target object has been picked successfully;
- when the area where the target object's drop point is located is the source turnover box or another area, it is determined that picking of the target object has failed; the other area is an area other than the destination turnover box and the source turnover box.
- An embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any one of the above target object detection methods when executing the program.
- the embodiment of the present application further provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, any one of the above target object detection methods is implemented.
- Embodiments of the present application further provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the above target object detection methods of the embodiments of the present application.
- the computer program product may be a software installation package.
- FIG. 1A is a first schematic diagram of detecting the drop area of a target object with a grating sensor in an embodiment of the present application;
- FIG. 1B is a second schematic diagram of detecting the drop area of a target object with a grating sensor in an embodiment of the present application;
- FIG. 2A is a schematic diagram of the principle of a vision-based target detection method according to an embodiment of the present application;
- FIG. 2B is a schematic diagram of the interior of a source turnover box in an embodiment of the present application;
- FIG. 3 is a schematic diagram of an application scenario of an embodiment of the present application;
- FIG. 4 is an optional flowchart of a target object detection method according to an embodiment of the present application;
- FIG. 5 is an optional schematic flowchart of manually labeling areas in an embodiment of the present application;
- FIG. 6 is another optional flowchart of a target object detection method according to an embodiment of the present application;
- FIG. 7 is a schematic flowchart of determining whether the areas where various types of turnover boxes are located are correct in an embodiment of the present application;
- FIG. 8 is a schematic structural diagram of a target object detection apparatus according to an embodiment of the present application;
- FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- a method or apparatus including a series of elements includes not only the explicitly stated elements but also other elements not expressly listed, or elements inherent to the implementation of the method or apparatus.
- without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional related elements (e.g., steps in the method or units in the apparatus) in the method or apparatus that includes the element.
- a unit in an apparatus may be, for example, part of a circuit, part of a processor, part of a program or software, and so on.
- the target object detection method provided in the embodiments of the present application includes a series of steps, but is not limited to the steps described.
- similarly, the target object detection apparatus provided in the embodiments of the present application includes a series of modules, but is not limited to the modules explicitly described, and may also include modules that need to be provided for obtaining relevant information or for processing based on such information.
- the object picking process can be implemented by a photoelectric sensor-based detection method, a vision-based target detection method, or a motion model-based detection method, which will be exemplarily described below.
- detection sensors may be installed in the source turnover box, the destination turnover box, and the area outside the turnover boxes.
- the detection sensors may include photoelectric through-beam sensors, light curtains, or grating sensors; with reference to FIGS. 1A and 1B,
- the first transmitter 101 or the second transmitter 103 of the grating sensor is used to transmit the optical signal
- the first receiver 102 is used to receive the optical signal transmitted by the first transmitter 101
- the second receiver 104 is used to receive the optical signal transmitted by the second transmitter 103;
- the area between the first transmitter 101 and the first receiver 102 is a detection area,
- and the area between the second transmitter 103 and the second receiver 104 is a detection area.
- whether a target object has entered a detection area can be determined from the optical signal received by the first receiver 102 or the second receiver 104; then, combined with the state of the robotic arm, it can be determined whether a target object has fallen from the robotic arm, and the drop area of the target object can even be detected.
- a vision sensor can be used to capture images of the interior of the turnover box, so that whether a target object has fallen from the robotic arm can be determined by analyzing the image data in combination with the picking process of the robotic arm; referring to FIG. 2A, the first camera 201 and the second camera 202 are 3D cameras, and can be used to capture red-green-blue (RGB) images of the interior of the turnover box 203 and obtain depth information of the objects in the images; whether a target object has fallen from the robotic arm is then determined according to the depth information of the objects in the images and the picking process of the robotic arm.
- target tracking of an object is implemented as follows: the moving area is computed using frame differencing; detection and recognition are then performed using feature matching and deep learning algorithms; finally, the target is tracked using a target tracking method.
- for the scenario in which the robotic arm picks items out of a box, as shown in FIG. 2B, multiple identical target objects may exist in the source turnover box;
- these objects may be the same stock keeping unit (SKU). Therefore, in this scenario it is difficult to guarantee the precision and accuracy of target detection and recognition, which affects the subsequent target tracking accuracy.
- a motion model of the robotic arm system can be established, and the pressure of the end picker (the device at the end of the robotic arm used for grasping objects) and the movement speed of the end of the robotic arm can be monitored in real time; a sudden change in the pressure value indicates that the picked object has fallen off the end picker; at this point, the drop point of the target object is estimated from the end speed of the robotic arm combined with the motion model of the robotic arm.
- all three of the above methods can detect an object that has fallen off the end picker and, under certain conditions, detect or estimate the final position where the picked target fell; however, all three schemes have limitations and drawbacks, described below.
- the light curtain or grating sensor has certain size limits, and there are gaps between the light beams, so smaller items may be missed; for example, in the area between the transmitter and the receiver, only 80% of the area may be an effective detection area. The entire area covered by the robotic arm workstation must be managed in partitions to distinguish which area the target object fell into; to cover the entire area to be detected, the grating sensor must be large, which encroaches on the working space of the robotic arm; and the grating sensor cannot detect the type of object entering the area, so when a non-target object enters, the entire picking process is affected.
- Vision-based target detection methods are costly and difficult to implement, and they have visual blind spots, so missed detections are possible; for example, referring to FIG. 2A, if only one camera is present, the visual blind spot shown in the figure exists. A vision-based target detection method needs to compare the target area in real time for changes in order to determine whether a target object has fallen into it. In one implementation, all measured areas must be visible to the vision sensor and the environment must be unobstructed; alternatively, the number of vision sensors is increased to reduce the occluded area. A vision sensor has a limited field of view and blind spots: for each turnover box, two sets of cameras are required to cover the entire box area, as shown in FIG. 2A, while areas outside the turnover boxes require even more cameras for simultaneous detection.
- the accuracy of the estimation result depends on the accuracy of the motion model, but the more accurate the motion model, the more parameters it depends on and the higher the system complexity; further, because of nonlinear factors, parts that cannot be modeled, and random events, the motion model can only be an approximation of the real scene and can never match it exactly.
- the estimated result is therefore only a statistical outcome and cannot give the exact drop point of each drop event. To sum up, the reliability of the estimation results obtained by the motion-model-based detection method is low.
- to sum up: the detection method based on the photoelectric sensor suffers from missed detections;
- the vision-based target detection method suffers from low detection accuracy and reliability caused by visual blind spots;
- and the reliability of the detection method based on the motion model is low.
- FIG. 3 is a schematic diagram of an application scenario of an embodiment of the present application.
- referring to FIG. 3, the robotic arm 301 is used to pick up the target object from the source turnover box 302 and place it in the destination turnover box 303; the robotic arm 301 includes at least a stationary base 304 and an end 305.
- the source turnover box 302 and the destination turnover box 303 are both containers for storing articles so as to facilitate handling; they represent two different types of turnover boxes, and the other area 306 represents the out-of-box area outside the source turnover box 302 and the destination turnover box 303.
- the target object may be a commodity or other types of articles, which is not limited in this embodiment of the present application.
- the robotic arm may be a 6-DOF robotic arm, and the end 305 may be provided with a gripper or suction cup for picking up the target object.
- the number of source turnover boxes 302 may be one or multiple; the number of destination turnover boxes 303 may be one or multiple, which is not limited in this embodiment of the present application.
- an image acquisition device 307 may also be deployed.
- the image acquisition device 307 is a hardware device for photographing the source turnover box 302, the destination turnover box 303, and the other area 306; in some embodiments, the image acquisition device 307 may be a camera, for example a consumer-grade camera.
- in order to detect the target object during the picking process, the areas within the shooting range of the image acquisition device 307 must first be labeled, and each area must be modeled in the camera coordinate system; referring to FIG. 3, the area within the shooting range of the image acquisition device 307 can be divided into the source turnover box, the destination turnover box, and other areas. In this way, when it is determined that the target object has fallen from the robotic arm, it can be determined whether the target object fell inside a turnover box or into another area; when it fell inside a turnover box, it can further be determined whether it fell into the source turnover box or the destination turnover box. After the drop area of the target object is determined, different coping strategies for the robotic arm can conveniently be formulated for different drop areas.
- a detection control system 308 may also be configured; the detection control system 308 and the robotic arm 301 may be connected through a network, and the detection control system 308 may send control signals to the robotic arm 301 to control the work of the robotic arm.
- the detection control system 308 can also receive various feedback data sent by the robot arm 301 .
- the detection control system 308 may form a communication connection with the image acquisition device 307 , for example, the detection control system 308 and the image acquisition device 307 may be connected through a network or a USB connection.
- the detection control system 308 can perform data interaction with the image acquisition device 307; for example, under the control of the detection control system 308, the image acquisition device 307 can track the target object picked up by the end 305 of the robotic arm and determine the drop point of the target object.
- the image acquisition device 307 can also return the acquired image to the detection control system 308 .
- the detection control system 308 may use wired or wireless network communication to perform data interaction with the main control device (not shown in FIG. 3 ) to obtain instructions and send status data.
- detection control software may be deployed in the detection control system 308 to control the working states of the robotic arm 301 and the image acquisition device 307 .
- the detection control system 308 may be implemented based on terminals and/or servers, where the terminals may be thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, small computer systems, and the like.
- a server can be a small computer system, a large computer system, a distributed cloud computing technology environment including any of the above, and the like.
- Electronic devices, such as servers, may be described in the general context of computer-system-executable instructions, such as program modules, executed by a computer system.
- program modules may include routines, programs, object programs, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types.
- Computer systems/servers may be implemented in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located on local or remote computing system storage media including storage devices.
- FIG. 4 is an optional flowchart of a target object detection method according to an embodiment of the present application. As shown in FIG. 4 , the flowchart may include:
- Step 401 When it is determined that the robot arm picks up the target from any turnover box, and the height of the target is greater than the height of any turnover box, perform target tracking on the target to obtain real-time position information of the target.
- any one of the above turnover boxes may be the source turnover box; in some embodiments, the detection control system may send a control signal to the robotic arm after determining that the picking process has started, so that the robotic arm picks up the target object from the turnover box through its end.
- the robotic arm can control the pose of the end of the robotic arm, and can also generate its own working state data, which can include the pose of the end of the robotic arm.
- after the end of the robotic arm picks up the target object, the pose of the object grasped by the end can be determined based on the pose of the end of the robotic arm; further, the robotic arm can return its own working status data to the detection control system, and the detection control system can judge, based on the data returned by the robotic arm, whether the robotic arm has picked up the target object.
- the data returned by the robotic arm may be status data of a pickup device or a grasping device at the end of the robotic arm, such as an air pressure value and the like.
- the height of any one of the above turnover boxes is the height of the top of that turnover box, and may be a predetermined value.
- the detection control system can obtain the height of the target object from the data returned by the robotic arm; it should be understood that the robotic arm can continuously return working status data to the detection control system, so that the detection control system can continuously obtain the height of the target object at the current moment.
- the detection control system can compare the height of the target object at the current moment with the height of the turnover box: when the height of the target object is less than or equal to the height of the turnover box, the target tracking process is not started and the height of the target object at the next moment continues to be obtained; when the height of the target object is greater than the height of the turnover box, the target tracking process can be started, as sketched below.
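- A minimal sketch of this height-gating step, assuming hypothetical helpers `get_end_pose()` (working-status data polled from the arm) and `start_target_tracking()`; the tote height is a predetermined value as described above:

```python
TOTE_TOP_HEIGHT = 0.30  # assumed predetermined height of the tote top, in meters

def gate_tracking(get_end_pose, start_target_tracking):
    """Start target tracking only once the picked target rises above the tote.

    After pickup, the target's pose follows the pose of the arm's end, so the
    end-effector height stands in for the target height (see the text above).
    """
    while True:
        _, _, z = get_end_pose()    # continuously returned working-status data
        if z > TOTE_TOP_HEIGHT:     # height at the current moment vs. tote height
            start_target_tracking()
            return
        # otherwise keep polling: obtain the height at the next moment
```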
- the target may be photographed by an image acquisition device, so as to realize the tracking of the target.
- the detection control system can control the image acquisition device to photograph the target object, and the image acquisition device can send the captured images to the detection control system; the detection control system can identify the target object with a deep-learning-based detection algorithm and then lock onto the target object during subsequent target tracking.
- the target can also be photographed in other ways, so as to achieve the target tracking; for example, laser positioning and other methods can be used to track the target.
- the real-time position information of the target object may be its position coordinates in the camera coordinate system, which takes the focal center of the image acquisition device as the origin and the optical axis of the image acquisition device as the Z axis;
- the X axis and Y axis of the camera coordinate system are the two mutually perpendicular coordinate axes of the image plane.
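- The embodiment does not give formulas; as a hedged illustration, a standard pinhole back-projection can map a tracked pixel plus a depth measurement into this camera coordinate system (the intrinsics `fx, fy, cx, cy` are assumed to come from camera calibration):

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into the camera frame:
    origin at the focal center, Z along the optical axis, X/Y in the image plane."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```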
- Step 402 When it is determined according to the real-time position information of the target object that the target object falls from the robotic arm, the region where the target object falls is determined according to the position information of the target object at the current moment and the regional position information.
- the robotic arm can move the target object from above the source turnover box to above the destination turnover box and then release it from the end, so that the target object falls into the destination turnover box.
- the detection control system can determine whether the target object has fallen from the robotic arm according to the position information of the target object obtained several times in a row.
- when it is determined that the target object has fallen from the robotic arm, the area where its drop point is located is determined according to the current position information of the target object and the predetermined area location information.
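- The embodiment only states that several consecutive position observations are used; one plausible criterion, shown here as an assumption rather than the patented method, is that the target stays separated from the end of the arm for several frames in a row:

```python
import numpy as np

SEPARATION_M = 0.05   # assumed camera-frame separation threshold, in meters
CONSECUTIVE = 3       # assumed number of consecutive observations required

def has_dropped(target_track, end_track):
    """Heuristic drop test over the last few (target, end) position pairs."""
    if len(target_track) < CONSECUTIVE or len(end_track) < CONSECUTIVE:
        return False
    return all(
        np.linalg.norm(np.asarray(t) - np.asarray(e)) > SEPARATION_M
        for t, e in zip(target_track[-CONSECUTIVE:], end_track[-CONSECUTIVE:])
    )
```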
- the above area location information may include the areas where various types of turnover boxes are located, and the types of turnover boxes may be the source turnover box and the destination turnover box; exemplarily, the area location information may also include other areas. A sketch of looking up the drop region follows.
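- Given labeled area polygons, determining the drop area reduces to a point-in-polygon lookup; the sketch below (with hypothetical polygon coordinates) also yields the pick result described later, where landing anywhere other than the destination turnover box counts as a failed pick:

```python
from matplotlib.path import Path

# Hypothetical area location information: polygons in camera/image coordinates.
REGIONS = {
    "destination_tote": Path([(120, 0), (220, 0), (220, 80), (120, 80)]),
    "source_tote":      Path([(0, 0), (100, 0), (100, 80), (0, 80)]),
}

def drop_region(point_xy):
    """Return the label of the labeled area containing the drop point."""
    for label, polygon in REGIONS.items():
        if polygon.contains_point(point_xy):
            return label
    return "other"  # outside both totes

def pick_succeeded(point_xy):
    return drop_region(point_xy) == "destination_tote"
```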
- steps 401 to 402 may be implemented based on a processor of the detection control system, and the above processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
- the embodiment of the present application needs neither a photoelectric sensor, light curtain, or grating sensor, nor an estimate of the target object's drop point based on a motion model of the robotic arm system.
- since the motion model of the robotic arm system does not need to be used for drop-point estimation, the possibility of false detection can be reduced, improving the accuracy of target object detection;
- target tracking is performed based on the area above the turnover box rather than the area inside it; therefore the possibility of visual blind spots is reduced to a certain extent, and there is no need to detect multiple identical targets inside the turnover box, which improves the accuracy and reliability of drop-point detection.
- the embodiments of the present application do not require photoelectric sensors, light curtains, or grating sensors, do not require transforming the on-site working environment, place low requirements on the on-site environment, can reduce implementation cost to a certain extent, and are easy to implement.
- based on the technical solution of the embodiments of the present application, the object detection method can be improved on the basis of an existing detection control system, so as to improve the efficiency of the robotic arm in picking objects.
- after it is determined that the robotic arm has picked up the target object from the turnover box, the initial position of the target object can be determined according to the pose information of the end of the robotic arm, and the height of the target object can be determined according to its initial position.
- target tracking may then be performed on the target object according to its initial position to obtain real-time position information of the target object.
- the initial area of the target object can be determined according to the initial position of the target object, so that the target tracking process can be started in combination with the initial area of the target object.
- in this way, the initial area of the target object can be located; further, in the initial stage of target tracking, the pose information of the end of the robotic arm can be used to narrow the area that needs to be identified during target tracking, realizing rapid positioning of the target object in the initial stage of target tracking.
- an image acquisition device may be used to shoot the target multiple times to obtain multiple captured images; the target is tracked according to each captured image.
- the target object in the multiple captured images can be tracked based on a deep-learning detection and recognition algorithm, so as to obtain real-time position information of the target object.
- when it is determined that the height of the target object is greater than the height of any one of the turnover boxes, the target object can be photographed for target tracking; that is, the shooting is based on the area above the turnover box rather than the area inside it, which reduces the possibility of visual blind spots to a certain extent and thereby improves the accuracy and reliability of target drop detection.
- the background in each captured image can be eliminated to obtain a background-eliminated result for each captured image, and the target object is tracked according to the background-eliminated result of each captured image; here, the background represents an image of a preset background object.
- the foreground and background of a captured image may be segmented to obtain the background in the captured image; exemplarily, a neural network for distinguishing foreground from background may be pre-trained, and the captured image is then processed with the trained neural network to obtain its background.
- the background of the target object is usually relatively simple,
- so background elimination can be used for filtering, allowing the target object to be detected and tracked accurately and improving subsequent tracking;
- with accurate detection, the drop point of the target object can be obtained precisely.
- the robotic arm station occupies a small area, and background objects can be preset according to actual needs.
- exemplarily, the preset background object is a solid-color background material, which facilitates eliminating the background in each captured image; a minimal sketch follows.
- the material of the preset background object may be cloth or another material, which is not limited in the embodiments of the present application.
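- Since the preset background object is a solid-color material, a simple reference-image difference suffices for the elimination step; the following OpenCV sketch assumes a `background` frame captured once from the empty backdrop:

```python
import cv2

def eliminate_background(frame, background, thresh=30):
    """Zero out pixels that match the preset solid-color background image,
    leaving only the moving target (and arm) for detection and tracking."""
    diff = cv2.absdiff(frame, background)              # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(frame, frame, mask=mask)    # background-eliminated result
```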
- after the area where the target object's drop point is located is determined, the position of the drop point may be determined according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- the relative positional relationship between the image acquisition device and the robotic arm can be calibrated, yielding the relationship between the camera coordinate system and the base coordinate system of the robotic arm as the reference;
- the precise position coordinates of the target object's drop point can then be obtained through coordinate transformation; here, the base coordinate system of the robotic arm represents the coordinate system with the base of the robotic arm as the origin.
- in this way, the position of the target object's drop point can be obtained accurately according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
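- A hedged sketch of the coordinate transformation: assuming the calibration yields a 4x4 homogeneous transform `T_base_cam` from the camera frame to the arm's base frame (an assumed calibration output, e.g. from hand-eye calibration), the drop point maps over as follows:

```python
import numpy as np

def camera_to_base(p_cam, T_base_cam):
    """Map a drop point from camera coordinates to the robotic arm's base
    coordinate system (origin at the arm's base), given the calibrated
    4x4 homogeneous transform T_base_cam."""
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous point
    return (T_base_cam @ p_h)[:3]
```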
- the picking scene can be labeled with areas so that the area where the target object falls can be determined; however, in an actual scene, the number and positions of source turnover boxes and destination turnover boxes can change with the orders, and when a different turnover box is placed at the same position, the meaning of that position changes as well. In the related art, when target detection is performed on the ground, the detection areas cannot be dynamically added, removed, or adjusted, which is not conducive to dynamically determining the area where the target object's drop point is located.
- the type of each turnover box can be determined according to the attributes of each of the plurality of turnover boxes; type labeling information of each turnover box is obtained according to its type; and the area location information is obtained according to the type labeling information of each turnover box.
- the attributes of each turnover box may include at least one of the following: color, texture, shape, size.
- point cloud data can be obtained from the RGB images captured each time, and recognition is performed to identify the type of each turnover box. In one example, different types of turnover boxes have different colors; in this case, the contour information of a turnover box can be computed from its point cloud data, the contour points can then be mapped into the RGB image to obtain the RGB image information of the turnover box, and the color of the turnover box can be determined from that RGB image information, so that the type of the turnover box can be identified. In another example, different types of turnover boxes have different shapes, for example the source turnover box and the destination turnover box have different outline shapes; in this case, the shape information of a turnover box can be determined from its contour information, so as to identify the type of the turnover box.
- the type of each turnover box can be labeled in the image, and then, combined with the position of each turnover box, the areas where the various turnover boxes are located can be determined, and the area location information can thus be determined; a color-based classification sketch follows.
- in this way, the embodiment of the present application can detect the turnover boxes in real time from the captured images, thereby dynamically distinguishing the source turnover box from the destination turnover box, which helps distinguish the turnover boxes from other areas and achieves dynamic labeling of the various areas; furthermore, the number and positions of the turnover boxes can be adjusted dynamically in specific engineering applications.
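- For the color-based example above, a sketch of classifying a tote from the mean color inside its contour (the contour mapped from the point cloud into the RGB image and the reference `color_table` are assumed inputs):

```python
import cv2
import numpy as np

def classify_tote(rgb_image, contour, color_table):
    """Label a tote by comparing the mean color inside its contour against
    reference colors, e.g. {"source_tote": (0, 0, 200), ...} in BGR order."""
    mask = np.zeros(rgb_image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    mean_bgr = np.array(cv2.mean(rgb_image, mask=mask)[:3])
    # Choose the type whose reference color is nearest to the observed mean.
    return min(color_table, key=lambda t: np.linalg.norm(color_table[t] - mean_bgr))
```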
- each area of the picking scene may also be manually marked.
- FIG. 5 is a schematic diagram of an optional process for manually labeling areas in an embodiment of the present application. As shown in FIG. 5, the process may include:
- Step 501 Determine each marked area.
- a plurality of labeled areas may be divided in the image of the picking scene, and each labeled area is labeled and calibrated in a preset order; after each area is determined, step 502 may be performed.
- the point cloud data of the picking scene can be obtained with a 3D camera, and the obtained data can then be filtered using height information so that the edge contours of the turnover boxes can be identified;
- in this way, each turnover box area and a non-turnover-box area can be determined; the above labeled areas include each turnover box area and the non-turnover-box area, and the non-turnover-box area is the above-mentioned other area.
- Step 502 Determine whether the labeling of each labeling area has been completed, if so, jump to step 505, and if not, execute step 503.
- Step 503 move to the next unmarked area, and then perform step 504 .
- the view of the picking scene image can be moved to the next unlabeled area, and in some embodiments, the next labeled area can be highlighted.
- Step 504 perform manual labeling on the current labeling area, and return to step 502 .
- the current marked area can be highlighted, and a selection dialog box pops up, where the selection dialog box is used for the user to select whether the current area is a destination turnover box, a source turnover box, or a non-turnover box area.
- the operator can choose to manually mark the current marked area.
- Step 505 Generate and display partition information.
- the partition information represents the label information of each label area in the image.
- Step 506 determine whether the partition information is correct, if so, end the process, if not, return to step 501 .
- the operator can confirm whether the partition information is correct.
- after the area where the target object's drop point is located is determined, whether the target object has been picked successfully can be determined according to that area: when the area is the destination turnover box, it is determined that the target object has been picked successfully; when the area is the source turnover box or another area, it is determined that picking of the target object has failed.
- in this way, the embodiment of the present application facilitates follow-up processing according to the picking result, which helps improve picking efficiency and success rate.
- FIG. 6 is another optional flowchart of a target object detection method according to an embodiment of the present application. As shown in FIG. 6 , the flowchart may include:
- Step 601 determine that the picking process starts, and then execute step 602 .
- Step 602 Perform target tracking on the target.
- Step 603 determine whether the target object falls, if yes, go to Step 604 , if not, go back to Step 602 .
- Step 604 Determine whether it falls into the source turnover box, if yes, go to Step 6041; if not, go to Step 605.
- Step 6041 It is determined that the object selection fails, and then, return to step 601 .
- the picking abnormality information can be reported to the detection control system.
- Step 605 determine whether it falls into another area, if yes, go to step 6051 ; if not, go to step 606 .
- Step 6051 It is determined that the target object selection has failed, and then returns to step 601 .
- the picking abnormality information may be reported to the detection control system.
- Step 606 determine whether it falls into the destination turnover box, if not, go to step 6061; if yes, go to step 607.
- Step 6061 It is determined that the object selection fails, and then, return to step 601 .
- if the target object has not fallen into the source turnover box, the destination turnover box, or another area, the picking is abnormal, and the picking abnormality information can be reported to the detection control system.
- Step 607 Determine whether the picking is completed, if not, return to step 601, if yes, end the process.
- multiple target objects can be picked according to steps 601 to 607: if all target objects have been picked successfully, the process ends; if some target objects have not yet been picked successfully, steps 601 to 607 are executed again until every target object is picked successfully. A compact sketch of this loop follows.
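- A compact sketch of the loop in FIG. 6, with hypothetical helpers wrapping the tracking and region-lookup steps sketched earlier; failed targets are re-queued, mirroring the return to step 601:

```python
def pick_all(targets, track_until_drop, drop_region, report_abnormality):
    """Run steps 601-607 until every target lands in the destination tote."""
    pending = list(targets)
    while pending:                                     # step 607: picking done?
        target = pending.pop(0)                        # step 601: start picking
        point = track_until_drop(target)               # steps 602-603
        region = drop_region(point)                    # steps 604-606
        if region != "destination_tote":
            report_abnormality(target, region)         # steps 6041/6051/6061
            pending.append(target)                     # retry this target
```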
- the warehouse management system can generate robotic arm picking work orders according to upstream order requirements and issue picking tasks to the detection control system.
- the detection control system then obtains the relevant information of the target object.
- the target object is detected, recognized, and its position updated until it falls off the end of the robotic arm; at that point, the area information of the drop point is output, completing the target tracking process.
- before step 601, it can be determined whether the predetermined areas where the various types of turnover boxes are located are correct.
- FIG. 7 is a schematic flowchart of determining whether the areas where various types of turnover boxes are located are correct in an embodiment of the present application. As shown in FIG. 7, the process may include:
- Step 701 Issue a picking task.
- the Warehouse Control System (WCS) can issue the picking task to the detection control system according to the robotic arm picking work order; the picking task represents the task of picking each target object.
- Step 702 Determine whether the picking preparation work is completed, if yes, go to Step 703, if not, go to Step 702 again.
- the picking preparation work may include judging whether the robotic arm and the image capturing device are ready, and if so, determining that the picking preparation work is completed.
- Step 703 Obtain the regions where various types of turnover boxes are located.
- this step has been described above and is not repeated here; after the areas where the various types of turnover boxes are located are obtained, the information (including quantity and location information) of the source turnover boxes and destination turnover boxes in the picking scene can be determined.
- Step 704 Return to WCS for confirmation.
- the information of the source turnover box and the destination turnover box can be transmitted to the WCS; the WCS pre-stores the information of the source turnover box and destination turnover box in the picking scene, so that the returned information can be confirmed.
- Step 705 Determine whether the information of the source turnover box and the destination turnover box is correct, if so, go to step 601 , if not, return to step 703 .
- based on the target object detection method of the foregoing embodiments, an embodiment of the present application further provides a target object detection apparatus.
- FIG. 8 is a schematic structural diagram of a target object detection device according to an embodiment of the present application. As shown in FIG. 8 , the device may include:
- the first processing module 801 is configured to, when it is determined that the robotic arm picks up the target object from any turnover box and the height of the target object is greater than the height of that turnover box, perform target tracking on the target object to obtain real-time position information of the target object;
- the second processing module 802 is configured to, when it is determined according to the real-time position information of the target object that the target object falls from the robotic arm, determine the area where the target object's drop point is located according to the position information of the target object at the current moment and area location information, where the area location information includes the areas where various types of turnover boxes are located.
- the second processing module 802 is further configured to:
- the type of each turnover box is determined according to the attributes of each of a plurality of turnover boxes; type labeling information of each turnover box is obtained according to its type; and the area location information is obtained according to the type labeling information of each turnover box.
- the attributes of each turnover box include at least one of the following: color, texture, shape, and size.
- the first processing module 801 is configured to perform target tracking on the target object by:
- photographing the target object multiple times with the image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
- the first processing module 801 is configured to perform target tracking on the target object according to each captured image by: eliminating the background in each captured image to obtain a background-eliminated result; and performing target tracking on the target object according to the background-eliminated result of each captured image, where the background represents an image of a preset background object.
- the second processing module 802 is further configured to:
- after the area where the target object's drop point is located is determined, determine the position of the target object's drop point according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- the first processing module 801 is further configured to:
- the initial position of the target is determined according to the pose information of the end of the robot arm; the height of the target is determined according to the initial position of the target.
- the first processing module 801 is configured to perform target tracking on the target object by:
- performing target tracking on the target object according to the initial position of the target object to obtain real-time position information of the target object.
- the second processing module 802 is further configured to:
- when the area where the target object's drop point is located is the destination turnover box, it is determined that the target object has been picked successfully;
- when the area where the target object's drop point is located is the source turnover box or another area, it is determined that picking of the target object has failed; the other area is an area other than the destination turnover box and the source turnover box.
- the above first processing module 801 and second processing module 802 can be implemented by a processor located in an electronic device, where the processor is at least one of an ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor.
- each functional module in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of software function modules.
- if the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
- the aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
- the computer program instructions corresponding to the target object detection method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive; when the computer program instructions corresponding to the target object detection method in the storage medium are read or executed by an electronic device, any one of the target object detection methods of the foregoing embodiments is implemented.
- the embodiments of the present application also provide a computer program product; the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program enables a computer to execute some or all of the steps of any one of the target object detection methods described in the foregoing method embodiments.
- FIG. 9 shows an electronic device 90 provided by an embodiment of the present application, which may include a memory 91, a processor 92, and a computer program stored in the memory 91 and executable on the processor 92; where:
- the memory 91 is configured to store computer programs and data;
- the processor 92 is configured to execute the computer program stored in the memory, so as to implement any one of the target object detection methods of the foregoing embodiments.
- the above memory 91 may be a volatile memory such as a RAM, or a non-volatile memory such as a ROM, a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD), or a combination of the above types of memory, and it provides instructions and data to the processor 92.
- the above-mentioned processor 92 may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor.
- the functions or modules included in the apparatus provided in the embodiments of the present application may be used to execute the methods described in the above method embodiments.
- the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
- based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of this application.
Abstract
Embodiments of the present application provide a target object detection method and apparatus, an electronic device, and a computer storage medium. The method includes: when it is determined that a robotic arm has picked up a target object from any turnover box and the height of the target object is greater than the height of that turnover box, performing target tracking on the target object to obtain real-time position information of the target object; and when it is determined, according to the real-time position information of the target object, that the target object has fallen from the robotic arm, determining the area where the target object's drop point is located according to the position information of the target object at the current moment and area location information, where the area location information includes the areas where various types of turnover boxes are located.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, Chinese Patent Application No. 202110077817.9, filed on January 20, 2021 and entitled "Target object detection method and apparatus, electronic device and computer storage medium", the entire contents of which are incorporated herein by reference.
The present application relates to target detection technology, including but not limited to a target object detection method and apparatus, an electronic device, a computer storage medium, and a computer program.
In the related art, for the object picking process, a detection method based on a photoelectric sensor, a vision-based target detection method, or a detection method based on a motion model can be used to estimate the drop point of an object.
SUMMARY
Embodiments of the present application are expected to provide a target object detection method and apparatus, an electronic device, a computer storage medium, and a computer program.
An embodiment of the present application provides a target object detection method, the method including:
when it is determined that a robotic arm has picked up a target object from any turnover box and the height of the target object is greater than the height of that turnover box, performing target tracking on the target object to obtain real-time position information of the target object;
when it is determined, according to the real-time position information of the target object, that the target object has fallen from the robotic arm, determining the area where the target object's drop point is located according to the position information of the target object at the current moment and area location information, where the area location information includes the areas where various types of turnover boxes are located.
In some embodiments of the present application, the method further includes:
determining the type of each turnover box according to the attributes of each of a plurality of turnover boxes; obtaining type labeling information of each turnover box according to its type; and obtaining the area location information according to the type labeling information of each turnover box.
In some embodiments of the present application, the attributes of each turnover box include at least one of the following: color, texture, shape, size.
In some embodiments of the present application, performing target tracking on the target object includes:
photographing the target object multiple times with an image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
In some embodiments of the present application, performing target tracking on the target object according to each captured image includes:
eliminating the background in each captured image to obtain a background-eliminated result for each captured image; and performing target tracking on the target object according to the background-eliminated result of each captured image, where the background of the target object represents an image of a preset background object.
In some embodiments of the present application, the method further includes:
after the area where the target object's drop point is located is determined, determining the position of the target object's drop point according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
In some embodiments of the present application, the method further includes:
after it is determined that the robotic arm has picked up the target object from the turnover box, determining the initial position of the target object according to the pose information of the end of the robotic arm; and determining the height of the target object according to the initial position of the target object.
In some embodiments of the present application, performing target tracking on the target object includes:
performing target tracking on the target object according to the initial position of the target object, to obtain the real-time position information of the target object.
In some embodiments of the present application, the method further includes:
when the area where the target object's drop point is located is a destination turnover box, determining that the target object has been picked successfully;
when the area where the target object's drop point is located is a source turnover box or another area, determining that picking of the target object has failed, the other area being an area other than the destination turnover box and the source turnover box.
An embodiment of the present application further provides a target object detection apparatus, the apparatus including:
a first processing module configured to, when it is determined that a robotic arm has picked up a target object from any one turnover box and the height of the target object is greater than the height of said turnover box, perform target tracking on the target object to obtain real-time position information of the target object;
a second processing module configured to, when it is determined according to the real-time position information of the target object that the target object has fallen from the robotic arm, determine, according to the position information of the target object at the current moment and region position information, the region where the landing point of the target object is located, the region position information including the regions where turnover boxes of various types are located.
In some embodiments of the present application, the second processing module is further configured to:
determine the type of each of multiple turnover boxes according to the attributes of each turnover box; obtain type labeling information of each turnover box according to its type; and derive the region position information according to the type labeling information of each turnover box.
In some embodiments of the present application, the attributes of each turnover box include at least one of the following: color, texture, shape, size.
In some embodiments of the present application, the first processing module being configured to perform target tracking on the target object includes:
photographing the target object multiple times with an image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
In some embodiments of the present application, the first processing module being configured to perform target tracking on the target object according to each captured image includes:
eliminating the background in each captured image to obtain a background-eliminated result of each captured image; and performing target tracking on the target object according to the background-eliminated result of each captured image; where the background of the target object represents an image of a preset background object.
In some embodiments of the present application, the second processing module is further configured to:
after the region where the landing point of the target object is located is determined, determine the position of the landing point of the target object according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
In some embodiments of the present application, the first processing module is further configured to:
after it is determined that the robotic arm has picked up the target object from a turnover box, determine the initial position of the target object according to the pose information of the end of the robotic arm; and determine the height of the target object according to its initial position.
In some embodiments of the present application, the first processing module being configured to perform target tracking on the target object includes:
performing target tracking on the target object according to its initial position to obtain the real-time position information of the target object.
In some embodiments of the present application, the second processing module is further configured to:
when the region where the landing point of the target object is located is the destination turnover box, determine that the target object has been picked successfully;
when the region where the landing point of the target object is located is the source turnover box or another region, determine that picking of the target object has failed, the other region being a region other than the destination turnover box and the source turnover box.
An embodiment of the present application further provides an electronic device, including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any one of the above target object detection methods.
An embodiment of the present application further provides a computer storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements any one of the above target object detection methods.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the above target object detection methods of the embodiments of the present application. The computer program product may be a software installation package.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the specification, serve to explain the technical solutions of the present application.
FIG. 1A is a first schematic diagram of detecting the falling region of a target object with a grating sensor in an embodiment of the present application;
FIG. 1B is a second schematic diagram of detecting the falling region of a target object with a grating sensor in an embodiment of the present application;
FIG. 2A is a schematic diagram of the principle of a vision-based target detection method in an embodiment of the present application;
FIG. 2B is a schematic diagram of the interior of a source turnover box in an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of an embodiment of the present application;
FIG. 4 is an optional flowchart of a target object detection method of an embodiment of the present application;
FIG. 5 is an optional schematic flowchart of manually labeling regions in an embodiment of the present application;
FIG. 6 is another optional flowchart of a target object detection method of an embodiment of the present application;
FIG. 7 is a schematic flowchart of determining whether the regions of the various types of turnover boxes are correct in an embodiment of the present application;
FIG. 8 is a schematic diagram of the composition of a target object detection apparatus of an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device of an embodiment of the present application.
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments provided here are merely intended to explain the present application and are not intended to limit it. In addition, the embodiments provided below are some, not all, of the embodiments for implementing the present application; where no conflict arises, the technical solutions recorded in the embodiments of the present application may be implemented in any combination.
It should be noted that, in the embodiments of the present application, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recorded elements but also other elements that are not explicitly listed, or elements inherent to implementing the method or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other related elements in the method or apparatus that includes that element (for example, a step in the method or a unit in the apparatus; the unit may be, for example, part of a circuit, part of a processor, part of a program or software, and so on).
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or C may denote three cases: A alone, both A and C, and C alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items.
For example, the target object detection method provided by the embodiments of the present application includes a series of steps, but it is not limited to the recorded steps; likewise, the target object detection apparatus provided by the embodiments of the present application includes a series of modules, but it is not limited to the explicitly recorded modules and may further include modules needed to acquire related information or to perform processing based on the information.
In the related art, an object picking process may be implemented with a detection method based on photoelectric sensors, a vision-based target detection method or a detection method based on a motion model; these are described by way of example below.
1) Detection method based on photoelectric sensors
In some embodiments, detection sensors may be installed at the source turnover box, the destination turnover box and the region outside the turnover boxes; for example, the detection sensors may include through-beam photoelectric sensors, light curtains or grating sensors. Referring to FIG. 1A and FIG. 1B, the first emitter 101 or the second emitter 103 of the grating sensor is configured to emit an optical signal, the first receiver 102 is configured to receive the optical signal emitted by the first emitter 101, and the second receiver 104 is configured to receive the optical signal emitted by the second emitter 103; the region between the first emitter 101 and the first receiver 102 is a detection region, and the region between the second emitter 103 and the second receiver 104 is a detection region. Whether a target object has entered a detection region can be judged from the optical signal received by the first receiver 102 or the second receiver 104; then, in combination with the state of the robotic arm, it can be judged whether a target object has fallen from the robotic arm, and the falling region of the target object can even be detected.
2) Vision-based target detection method
In some embodiments, a vision sensor may be used to capture images of the interior of a turnover box, so that whether a target object has fallen from the robotic arm can be judged by analyzing the image data in combination with the picking flow of the robotic arm. Referring to FIG. 2A, the first camera 201 and the second camera 202 are 3D cameras; they can be used to capture red-green-blue (RGB) images of the interior of the turnover box 203 and obtain the depth information of the objects in the images, and whether a target object has fallen from the robotic arm can be judged from the depth information of the objects in the images and the picking flow of the robotic arm.
In some embodiments, target tracking of an object is implemented as follows: frame differencing is used to compute the moving region, detection and recognition are then performed with feature matching and deep learning algorithms, and the target is finally tracked with a target tracking method. For the scenario in which a robotic arm picks items out of a box, as shown in FIG. 2B, multiple identical target objects 204 may exist in the source turnover box; for example, these target objects may be stock keeping units (SKUs). In such a scenario it is therefore difficult to guarantee the precision and accuracy of detecting and recognizing the target object, which in turn degrades the subsequent target tracking precision.
3) Detection method based on a motion model
In some embodiments, a motion model of the robotic arm system may be established, and the pressure of the end picker (the device at the end of the robotic arm for grasping objects) and the motion speed of the end of the robotic arm are monitored in real time; a sudden change of the pressure value indicates that the picked target object has come off the end picker, and at this point the landing point of the target object is estimated from the end speed of the robotic arm in combination with the motion model of the robotic arm.
All three methods above can detect an object that has come off the end picker, and under specific conditions can detect or estimate the final position where the picked target object falls; however, all three solutions have limitations and drawbacks, as explained below.
Referring to FIG. 1A and FIG. 1B, light curtains and grating sensors are subject to certain size limits, and there are gaps between the light beams, so small items may be missed; for example, in the region between the emitter and the receiver, only 80% of the region is an effective detection region. The entire region covered by the robotic arm workstation has to be managed in partitions in order to distinguish which region the target object fell into; to cover the entire region to be detected, the grating sensor has to be large, which encroaches on the working space of the robotic arm; and a grating sensor cannot detect the kind of object entering the region, so when another, non-target object enters, the entire picking flow is affected.
The vision-based target detection method is costly and difficult to implement, and it has visual blind zones, so detections may be missed; for example, referring to FIG. 2A, if only one of the two cameras is present, there is a visual blind zone as shown in the figure. The vision-based target detection method needs to compare the target region in real time for changes in order to judge whether a target object has fallen into it. In one implementation, all regions to be measured must first be capturable by the vision sensors and the environment must be free of occlusion, or the occluded regions are reduced by increasing the number of vision sensors. The field of view of a vision sensor is limited and contains blind zones; for each turnover box, at least two sets of cameras are needed to cover the entire turnover box region, as shown in FIG. 2A, and for the regions outside the turnover boxes even more cameras are needed for simultaneous detection.
Further, when the vision-based target detection method is adopted, the spatial positions of the image sensors need to be arranged as numerously and reasonably as possible, which can reduce the number and extent of the visual blind zones as far as possible; but the visual blind zones cannot be eliminated. When the measured object falls into a visual blind zone, no vision sensor can capture it, which causes a missed detection and makes the detection result unreliable.
When the detection method based on a motion model is used to estimate the falling region of a target object, the accuracy of the estimate depends on the precision of the motion model; but the more precise the motion model, the more parameters it depends on and the higher the system complexity. Further, owing to nonlinear factors, unmodelable parts and random events, the motion model can only be a simulation of the real scene and cannot be fully consistent with it; the estimate is also only a statistical result and cannot give the exact landing point of every falling event. In summary, the reliability of the estimates obtained by the motion-model-based detection method is low.
It can be seen that the detection method based on photoelectric sensors suffers from missed detections, the vision-based target detection method suffers from low detection accuracy and reliability caused by visual blind zones, and the detection method based on a motion model has low reliability; in summary, the related-art solutions for estimating the landing point of an object suffer from low reliability and accuracy.
The technical solutions of the embodiments of the present application are proposed to address the above technical problems.
FIG. 3 is a schematic diagram of an application scenario of an embodiment of the present application. As shown in FIG. 3, the robotic arm 301 is configured to pick a target object from the source turnover box 302 and place it into the destination turnover box 303; the robotic arm includes at least a stationary base 304 and an end 305. The source turnover box 302 and the destination turnover box 303 are both containers for holding items so that the items can be transported conveniently; they represent two different types of turnover boxes, and the other region 306 represents the out-of-box region other than the source turnover box 302 and the destination turnover box 303.
In some embodiments, the target object may be a commodity or another type of item; the embodiments of the present application do not limit this.
In some embodiments, the robotic arm may be a 6-degree-of-freedom manipulator, and the end 305 may be fitted with a gripper or a suction cup for picking up the target object.
In some embodiments, there may be one or more source turnover boxes 302, and there may be one or more destination turnover boxes 303; the embodiments of the present application do not limit this.
Referring to FIG. 3, an image acquisition device 307 may further be deployed; the image acquisition device 307 is a hardware device for photographing the source turnover box 302, the destination turnover box 303 and the other region 306. In some embodiments, the image acquisition device 307 may be a camera or a similar device; by way of example, it may be a consumer-grade camera.
In some embodiments, to detect the target object during the picking flow, the region within the shooting range of the image acquisition device 307 first needs to be partitioned and marked, and each region is modeled in the camera coordinate system. Referring to FIG. 3, the region within the shooting range of the image acquisition device 307 may be divided into the source turnover box, the destination turnover box and the other region. In this way, when it is determined that the target object has fallen from the robotic arm, it can be determined whether the target object fell inside a turnover box or into the other region; and when it is determined that the target object fell inside a turnover box, it can further be detected whether it fell into the source turnover box or into the destination turnover box. Once the falling region of the target object is determined, different coping strategies for the robotic arm can be formulated according to the different falling regions.
In some embodiments, referring to FIG. 3, a detection control system 308 may further be configured. The detection control system 308 and the robotic arm 301 may be connected over a network; the detection control system 308 may send control signals to the robotic arm 301 to control the working state of the robotic arm, and may also receive various feedback data sent by the robotic arm 301.
The detection control system 308 may form a communication connection with the image acquisition device 307; by way of example, the detection control system 308 and the image acquisition device 307 may be connected over a network or via USB. The detection control system 308 may exchange data with the image acquisition device 307; by way of example, under the control of the detection control system 308, the image acquisition device 307 may track the target object picked up by the robotic arm end 305 and determine the falling region of the target object, and the image acquisition device 307 may also return the captured images to the detection control system 308.
In some embodiments, the detection control system 308 may exchange data with a master control device (not shown in FIG. 3) via wired or wireless network communication, to obtain instructions and send state data.
In some embodiments, detection control software may be deployed in the detection control system 308 to control the working states of the robotic arm 301 and the image acquisition device 307.
In some embodiments, the detection control system 308 may be implemented on a terminal and/or a server; here, the terminal may be a thin client, a thick client, a handheld or laptop device, a microprocessor-based system, a set-top box, a programmable consumer electronics product, a networked personal computer, a small computer system, and so on, and the server may be a small computer system, a large computer system, a distributed cloud computing environment including any of the above systems, and so on.
Electronic devices such as servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures and so on, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network; in a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
FIG. 4 is an optional flowchart of a target object detection method of an embodiment of the present application; as shown in FIG. 4, the flow may include:
Step 401: when it is determined that the robotic arm has picked up the target object from any one turnover box and the height of the target object is greater than the height of said turnover box, perform target tracking on the target object to obtain real-time position information of the target object.
In the embodiments of the present application, said any one turnover box may be the source turnover box. In some embodiments, after determining that the picking flow has started, the detection control system may send a control signal to the robotic arm so that the robotic arm picks up the target object from the turnover box through its end. The robotic arm can control the pose of its end and can also generate its own working state data, which may include the pose of the end of the robotic arm; it should be understood that, after the end of the robotic arm has picked up the target object, the pose of the target object grasped by the end can be determined from the pose of the end. Further, the robotic arm may return its working state data to the detection control system, and the detection control system may judge from the returned data whether the robotic arm has picked up the target object. By way of example, the data returned by the robotic arm may be state data of the pickup or grasping device at the end of the robotic arm, such as an air pressure value.
In some embodiments, the height of said any one turnover box is the height of the top of said turnover box, and it may be a predetermined value.
In some embodiments, the detection control system may derive the height of the target object from the data returned by the robotic arm; it should be understood that the robotic arm can continuously return working state data to the detection control system, so that the detection control system continuously obtains the height of the target object at the current moment.
In some embodiments, the detection control system may compare the height of the target object at the current moment with the height of said turnover box; when the current height of the target object is less than or equal to the height of the turnover box, the target tracking flow may be left unstarted and the height of the target object at the next moment continues to be acquired; when the current height of the target object is greater than the height of the turnover box, the target tracking flow may be started.
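Purely by way of illustration, and not as part of the embodiments described above, this height-gated start of the tracking flow might be sketched in Python as follows; the arm interface (end_pose) and the constant BOX_TOP_HEIGHT are hypothetical names introduced only for this sketch.

```python
# Minimal sketch: poll the arm pose and start tracking once the picked object
# rises above the top of the turnover box. All names are hypothetical.
import time

BOX_TOP_HEIGHT = 0.35  # assumed, predetermined top height of the box, in meters

def target_height(arm) -> float:
    """Derive the current height of the picked object from the arm's end pose."""
    x, y, z = arm.end_pose()  # hypothetical API returning the end position
    return z                  # the object height is taken here as the end height

def wait_until_above_box(arm, poll_s: float = 0.02) -> None:
    """Block until the object is above the box top; tracking may then start."""
    while target_height(arm) <= BOX_TOP_HEIGHT:
        time.sleep(poll_s)    # keep acquiring the height at the next moment
    # the height now exceeds the box top: the target tracking flow can start
```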
As to how target tracking of the target object is implemented, in one example, referring to FIG. 3, the image acquisition device may be used to photograph the target object, thereby tracking it.
In some embodiments, when the detection control system determines that the robotic arm has picked up the target object from any one turnover box and the height of the target object is greater than the height of said turnover box, it may control the image acquisition device to photograph the target object, and the image acquisition device may send the captured images to the detection control system; the detection control system may recognize the target object with a deep-learning-based detection algorithm and then lock onto that target object during subsequent tracking.
As to how target tracking of the target object is implemented, in another example, the target object may also be photographed in other ways so as to track it; for example, laser positioning may be used to track the target object.
In some embodiments, the real-time position information of the target object may be its position coordinates in the camera coordinate system; the camera coordinate system is a three-dimensional rectangular coordinate system whose origin is the focal center of the image acquisition device and whose Z axis is the optical axis of the image acquisition device, the X and Y axes of the camera coordinate system being the two mutually perpendicular coordinate axes of the image plane.
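As a hedged aside, the mapping from an image observation to camera-frame coordinates under the standard pinhole model might look as follows; the intrinsic parameters below are placeholders that camera calibration would supply in practice.

```python
# Minimal sketch: recover a 3D point in the camera coordinate system from a
# pixel (u, v) and its depth z; fx, fy, cx, cy are placeholder intrinsics.
def pixel_to_camera(u: float, v: float, z: float,
                    fx: float = 600.0, fy: float = 600.0,
                    cx: float = 320.0, cy: float = 240.0):
    """Map an image-plane observation with depth to camera-frame coordinates."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z  # Z along the optical axis, X/Y in the image plane
```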
Step 402: when it is determined from the real-time position information of the target object that the target object has fallen from the robotic arm, determine, according to the position information of the target object at the current moment and the region position information, the region where the landing point of the target object is located.
In some embodiments of the present application, the robotic arm may move the target object from above the source turnover box to above the destination turnover box and then control the end to release the target object so that the target object falls into the destination turnover box. The detection control system may judge from the position information of the target object obtained multiple consecutive times whether the target object has fallen from the robotic arm; if it is determined that the target object has not fallen from the robotic arm, the real-time position information of the target object continues to be acquired; if it is determined that the target object has fallen from the robotic arm, the region where the landing point of the target object is located is determined according to the position information of the target object at the current moment and the predetermined region position information.
In the embodiments of the present application, the region position information may include the regions where turnover boxes of various types are located, the types of turnover boxes being, for example, source turnover boxes and destination turnover boxes; by way of example, the region position information may further include the other region.
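Purely by way of illustration, judging a drop from consecutively observed positions and then looking up the containing region might be sketched as follows; the separation threshold, the rectangular region representation and all names are assumptions of this sketch, not part of the embodiments above.

```python
# Minimal sketch: declare a drop when the tracked object separates from the arm
# end over several consecutive observations, then look up the landing region.
from dataclasses import dataclass

@dataclass
class Region:
    name: str          # e.g. "source", "destination", "other"
    x_min: float; x_max: float
    y_min: float; y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def has_fallen(track, end_positions, gap: float = 0.05) -> bool:
    """track / end_positions: lists of (x, y, z) samples; a drop is assumed when
    the object stays more than `gap` meters from the end over the last 3 samples."""
    recent = zip(track[-3:], end_positions[-3:])
    return all(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 > gap
               for p, q in recent)

def landing_region(regions, x: float, y: float) -> str:
    for r in regions:
        if r.contains(x, y):
            return r.name
    return "other"
```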
In practical applications, steps 401 to 402 may be implemented by a processor of the detection control system; the processor may be at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller and a microprocessor.
Understandably, the embodiments of the present application require neither through-beam photoelectric sensors, light curtains nor grating sensors, and do not estimate the landing point of the target object from a motion model of the robotic arm system. Compared with the related-art detection method based on photoelectric sensors, the possibility of missed detection is therefore reduced and the reliability of target object detection is improved; compared with the related-art detection method based on a motion model, since no motion model of the robotic arm system is used to estimate the landing point, the possibility of false detection can be reduced and the accuracy of target object detection improved. Further, the embodiments of the present application can track the target object once it is determined that the height of the target object is greater than the height of said turnover box; in other words, tracking is based on the region above the turnover boxes rather than the region inside them, which reduces the possibility of visual blind zones to a certain extent, and there is no need to detect multiple identical target objects inside a turnover box, thereby improving the accuracy and reliability of landing-point detection of the target object.
Further, since the embodiments of the present application require neither through-beam photoelectric sensors, light curtains nor grating sensors, the on-site working environment does not need to be modified; the solution places low demands on the site environment, can reduce implementation cost to a certain extent, and is easy to implement. Based on the technical solutions of the embodiments of the present application, improving the target object detection method on top of an existing detection control system can improve the efficiency with which the robotic arm picks items.
As to how the height of the target object is determined, by way of example, after it is determined that the robotic arm has picked up the target object from a turnover box, the initial position of the target object may be determined from the pose information of the end of the robotic arm, and the height of the target object may then be determined from its initial position.
In some embodiments, target tracking may be performed on the target object according to its initial position to obtain the real-time position information of the target object.
Here, after the initial position of the target object is determined, the initial region of the target object may be determined from the initial position, so that the target tracking flow is started with the initial region of the target object.
Understandably, after the initial position of the target object is located from the pose information of the end of the robotic arm, the initial region of the target object can be located; in the initial stage of target tracking, the region that needs to be recognized can then be narrowed with the pose information of the end of the robotic arm, enabling fast localization of the target object in the initial stage of tracking.
As to how target tracking is performed on the target object, in some embodiments, an image acquisition device may be used to photograph the target object multiple times to obtain multiple captured images, and target tracking is performed on the target object according to each captured image.
In the embodiments of the present application, after the multiple captured images are obtained, the target object in them may be tracked with a deep-learning-based detection and recognition algorithm, thereby obtaining the real-time position information of the target object.
Understandably, the embodiments of the present application can photograph the target object, and thereby track it, once it is determined that the height of the target object is greater than the height of said turnover box; in other words, the shooting is based on the region above the turnover boxes and the region inside the turnover boxes is not photographed, which reduces the possibility of visual blind zones to a certain extent and thereby improves the accuracy and reliability of landing-point detection of the target object.
As to how target tracking is performed on the target object according to each captured image, in some embodiments, the background in each captured image may be eliminated to obtain a background-eliminated result of each captured image, and target tracking is performed on the target object according to the background-eliminated result of each captured image; here, the background of the target object represents an image of a preset background object.
In the embodiments of the present application, after a captured image is obtained, it may be divided into foreground and background to obtain the background in the captured image; by way of example, a neural network for distinguishing foreground from background may be trained in advance, and the captured image is then processed with the trained network to obtain the background in it.
It should be understood that when the height of the target object is greater than the height of the turnover box, the background of the target object is usually relatively simple; in this case, background elimination can be used for filtering so that target detection and tracking are performed precisely, which helps improve the precision of subsequent tracking and detection and obtain the landing point of the target object accurately.
In some embodiments, the robotic arm workstation occupies little floor space, and a background object may be preset according to actual needs; for example, the preset background object is a background material of a solid color, which facilitates eliminating the background in each captured image.
In some embodiments, the material of the preset background object may be cloth or another material; the embodiments of the present application do not limit this.
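By way of illustration only, background elimination against such a preset, simple background might be sketched with a stock background-subtraction routine as follows; the embodiments above do not prescribe OpenCV or any particular algorithm, so this is merely one plausible realization.

```python
# Minimal sketch: keep only the moving foreground (e.g. the picked object)
# in each captured image, assuming OpenCV's MOG2 background subtractor.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

def foreground_of(frame):
    """Return the background-eliminated result of one captured image."""
    mask = subtractor.apply(frame)                   # subtract the learned background
    mask = cv2.medianBlur(mask, 5)                   # suppress speckle noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(frame, frame, mask=mask)  # keep foreground pixels only
```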
In some embodiments, after the region where the landing point of the target object is located is determined, the position of the landing point of the target object may be determined according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
In the embodiments of the present application, if not only the region where the landing point is located but also the precise coordinates of the landing point are needed, the relative positional relationship between the image acquisition device and the robotic arm may be calibrated to obtain the region relationships referenced to the base coordinate system of the robotic arm, and the precise position coordinates of the landing point may then be obtained through a coordinate transformation; here, the base coordinate system of the robotic arm may be a coordinate system whose origin is the base of the robotic arm.
Understandably, the embodiments of the present application can obtain the position of the landing point of the target object precisely according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
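As a hedged sketch of this calibration-based step, converting a landing point from the camera coordinate system into the base coordinate system of the robotic arm with a homogeneous transform might look as follows; the 4x4 matrix is a placeholder that hand-eye calibration would supply in practice.

```python
# Minimal sketch: map a camera-frame landing point into the arm base frame.
import numpy as np

T_base_cam = np.eye(4)  # placeholder calibrated transform (base <- camera)

def camera_to_base(p_cam: np.ndarray) -> np.ndarray:
    """p_cam: (3,) point in the camera frame -> (3,) point in the base frame."""
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

landing_cam = np.array([0.12, -0.30, 0.85])  # example camera-frame landing point
landing_base = camera_to_base(landing_cam)
```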
In the related art, the picking scene can be labeled by region in order to determine the region of the landing point of the target object; in an actual scene, however, the number and positions of source and destination turnover boxes can change with orders, and when a different turnover box is placed at the same position, the meaning of that position changes accordingly. When landing-point detection is performed in the related art, detection regions cannot be added, removed or adjusted dynamically, which is unfavorable to dynamically determining the region where the landing point of the target object is located.
In some embodiments of the present application, the type of each of multiple turnover boxes may be determined according to the attributes of each turnover box; type labeling information of each turnover box is obtained according to its type; and the region position information is derived according to the type labeling information of each turnover box.
In some embodiments, the attributes of each turnover box may include at least one of the following: color, texture, shape, size.
In the embodiments of the present application, when turnover boxes of different types have different attributes, point cloud data may be obtained from each captured RGB image, and the attributes of a turnover box are recognized from the point cloud data corresponding to each captured RGB image, thereby recognizing the type of the turnover box. In one example, turnover boxes of different types have different colors; in this case, the contour information of a turnover box may be computed from its point cloud data, the contour points are then mapped onto the RGB image to obtain the RGB image information of the turnover box, the color of the turnover box is determined from its RGB image information, and the type of the turnover box can thereby be recognized. In another example, turnover boxes of different types have different shapes, for example the source turnover box and the destination turnover box differ in shape; in this case, the shape information of a turnover box may be determined from its contour information, thereby recognizing its type.
After the type of each turnover box is obtained, each turnover box may be labeled with its type in the image; then, combined with the position of each turnover box, the regions where the various types of turnover boxes are located, i.e. the region position information, can be determined.
Understandably, the embodiments of the present application can detect the turnover boxes in real time from the captured images and thereby dynamically distinguish source turnover boxes from destination turnover boxes, which helps distinguish the turnover boxes from the other region and achieves dynamic marking of the various regions; in specific engineering applications, the number and positions of the turnover boxes can then be adjusted dynamically.
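Purely as an illustration of the color-based variant described above, classifying a turnover box from the mean color inside its contour might be sketched as follows; the color ranges and names are assumptions of this sketch only.

```python
# Minimal sketch: classify a box by its dominant color after mapping its
# point-cloud contour onto the RGB image. Ranges below are assumed.
import numpy as np

COLOR_RANGES = {  # assumed per-type (lo, hi) bounds on the mean BGR color
    "source": ((80, 0, 0), (255, 100, 100)),       # bluish boxes
    "destination": ((0, 0, 80), (100, 100, 255)),  # reddish boxes
}

def classify_box(image: np.ndarray, contour_mask: np.ndarray) -> str:
    """contour_mask: boolean mask of the box pixels derived from its contour."""
    mean = image[contour_mask].mean(axis=0)  # mean BGR color inside the contour
    for box_type, (lo, hi) in COLOR_RANGES.items():
        if all(l <= c <= h for c, l, h in zip(mean, lo, hi)):
            return box_type
    return "unknown"
```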
In some embodiments, the regions of the picking scene may also be labeled manually. FIG. 5 is an optional schematic flowchart of manually labeling regions in an embodiment of the present application; as shown in FIG. 5, the flow may include:
Step 501: determine the labeling regions.
In some embodiments, multiple labeling regions may be delimited in the image of the picking scene, and labeling and calibration are started for the labeling regions in a preset order; after the regions are determined, step 502 may be executed.
In other embodiments, a 3D camera may be used to acquire point cloud data of the picking scene, and the acquired data are then filtered using height information so that the edge contours of the turnover boxes can be recognized; once the contours are recognized, the turnover box regions and the non-turnover-box region can be determined. The labeling regions include the turnover box regions and the non-turnover-box region, the non-turnover-box region being the other region mentioned above.
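By way of illustration only, the height-based filtering of the scene point cloud mentioned above might be sketched as follows; the rim height and band width are assumed parameters.

```python
# Minimal sketch: keep only points near the assumed box-rim height, which are
# candidates for the edge contours of the turnover boxes.
import numpy as np

def rim_points(cloud: np.ndarray, rim_z: float, band: float = 0.01) -> np.ndarray:
    """cloud: (N, 3) points; keep points whose height is within `band` of rim_z."""
    z = cloud[:, 2]
    return cloud[np.abs(z - rim_z) <= band]
```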
Step 502: judge whether all labeling regions have been labeled; if yes, jump to step 505; if no, execute step 503.
Step 503: move to the next unlabeled region, then execute step 504.
In this step, the flow may move to the next unlabeled region in the image of the picking scene; in some embodiments, the next labeling region may be highlighted.
Step 504: manually label the current labeling region, and return to step 502.
In some embodiments, the current labeling region may be highlighted and a selection dialog box popped up; the selection dialog box is for the user to select whether the current region is a destination turnover box, a source turnover box or a non-turnover-box region. After the selection dialog box pops up, the operator can make a selection so as to manually label the current labeling region.
Step 505: generate and display the partition information.
Here, the partition information represents the labeling information of each labeling region in the image.
Step 506: judge whether the partition information is correct; if yes, end the flow; if no, return to step 501.
In this step, the operator may judge whether the partition information is correct.
In some embodiments, after the region where the landing point of the target object is located is determined, whether the target object has been picked successfully may be judged from that region; by way of example, if the region where the landing point is located is the destination turnover box, it is determined that the target object has been picked successfully; if the region where the landing point is located is the source turnover box or the other region, it is determined that picking of the target object has failed.
Understandably, when the region where the landing point of the target object is located is the destination turnover box, the target object has reached its destination as required, and it can be determined that picking succeeded; when the region where the landing point is located is the source turnover box or the other region, the target object has not reached its destination as required, and it can be determined that picking failed. The embodiments of the present application thus facilitate subsequent processing according to the picking result, which helps improve the efficiency and success rate of picking.
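Purely as a sketch, the success rule stated above maps directly onto a small decision function; the region names are the hypothetical labels used in the earlier sketches.

```python
# Minimal sketch: destination box means success; source box or any other
# region means the object missed its destination, i.e. a failed pick.
def picking_result(landing_region: str) -> str:
    return "success" if landing_region == "destination" else "failure"
```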
FIG. 6 is another optional flowchart of a target object detection method of an embodiment of the present application; as shown in FIG. 6, the flow may include:
Step 601: determine that the picking flow has started, then execute step 602.
Step 602: perform target tracking on the target object.
The implementation of this step has been explained in the foregoing embodiments and is not repeated here.
Step 603: judge whether the target object has fallen; if yes, execute step 604; if no, return to step 602.
The implementation of this step has been explained in the foregoing embodiments and is not repeated here.
Step 604: judge whether it fell into the source turnover box; if yes, execute step 6041; if no, execute step 605.
Step 6041: determine that picking of the target object has failed; then return to step 601.
In some embodiments, after it is determined that the target object fell into the source turnover box, picking abnormality information may be reported to the detection control system.
Step 605: judge whether it fell into the other region; if yes, execute step 6051; if no, execute step 606.
Step 6051: determine that picking of the target object has failed; then return to step 601.
In some embodiments, after it is determined that the target object fell into the other region, picking abnormality information may be reported to the detection control system.
Step 606: judge whether it fell into the destination turnover box; if no, execute step 6061; if yes, execute step 607.
Step 6061: determine that picking of the target object has failed; then return to step 601.
In some embodiments, if the target object fell into none of the source turnover box, the destination turnover box and the other region, the picking is abnormal, and picking abnormality information may be reported to the detection control system.
Step 607: judge whether picking is complete; if no, return to step 601; if yes, end the flow.
In the embodiments of the present application, multiple target objects may be picked following steps 601 to 606; if all target objects have been picked successfully, the flow can end; if there remain target objects that have not been picked successfully, steps 601 to 607 can be executed again until all target objects have been picked successfully.
It can be seen that, with the flows shown in FIG. 5 and FIG. 6, automatic detection of the landing point of the target object can be completed once the source and destination turnover boxes are determined; further, through interaction with the detection control system, supplemented by coping strategies for picking failures, the success rate and efficiency of picking can be greatly improved. A sketch of this per-object decision flow is given below.
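As a hedged sketch only, the per-object decision flow of FIG. 6 might be written as follows; pick_once and report are hypothetical routines standing in for one pick-track-detect cycle and for the abnormality reporting described above.

```python
# Minimal sketch of the FIG. 6 loop: retry each pick until the object lands
# in the destination box; report every failed landing region.
def pick_all(items, pick_once, report):
    for item in items:
        while True:
            region = pick_once(item)     # steps 601-603: pick, track, detect drop
            if region == "destination":  # step 606: landed in the destination box
                break                    # picked successfully; go to the next item
            report(item, region)         # steps 6041/6051/6061: report the failure
            # picking failed: restart the flow for this item (back to step 601)
```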
In some application scenarios, a warehouse control system (WCS) may generate robotic-arm picking work orders according to upstream order demands and issue picking tasks to the detection control system. The detection control system obtains the relevant information of the target object; during picking, it detects and recognizes the target object and keeps updating the target until the target object comes off the end of the robotic arm, at which point the region information of the landing point is output and the target tracking process is completed.
In this application scenario, before step 601, it may be judged whether the predetermined regions of the various types of turnover boxes are correct.
FIG. 7 is a schematic flowchart of determining whether the regions of the various types of turnover boxes are correct in an embodiment of the present application; as shown in FIG. 7, the flow may include:
Step 701: issue a picking task.
Here, the WCS may issue a picking task to the detection control system according to the robotic-arm picking work order; the picking task represents the task of picking the target objects.
Step 702: judge whether the picking preparations are complete; if yes, execute step 703; if no, execute step 702 again.
Here, the picking preparations may include judging whether the robotic arm and the image acquisition device are ready; if yes, it is determined that the picking preparations are complete.
Step 703: derive the regions where the various types of turnover boxes are located.
The implementation of this step has been explained in the foregoing and is not repeated here; once the regions of the various types of turnover boxes are obtained, the information (including the number and position information) of the source and destination turnover boxes in the picking scene can be determined.
Step 704: return the information to the WCS for confirmation.
In this step, the information of the source and destination turnover boxes may be transmitted to the WCS; the WCS stores in advance the information of the source and destination turnover boxes in the picking scene, so that the information of the source and destination turnover boxes can be confirmed.
Step 705: judge whether the information of the source and destination turnover boxes is correct; if yes, step 601 may be executed; if no, return to step 703.
It can be seen that, with the flows shown in FIG. 5, FIG. 6 and FIG. 7, automatic detection of the falling region of the target object can be achieved once the information of the source and destination turnover boxes is confirmed to be correct, which makes the solution applicable to various picking scenarios.
On the basis of the target object detection method proposed in the foregoing embodiments, an embodiment of the present application further proposes a target object detection apparatus.
FIG. 8 is a schematic diagram of the composition of a target object detection apparatus of an embodiment of the present application; as shown in FIG. 8, the apparatus may include:
a first processing module 801 configured to, when it is determined that a robotic arm has picked up a target object from any one turnover box and the height of the target object is greater than the height of said turnover box, perform target tracking on the target object to obtain real-time position information of the target object;
a second processing module 802 configured to, when it is determined according to the real-time position information of the target object that the target object has fallen from the robotic arm, determine, according to the position information of the target object at the current moment and region position information, the region where the landing point of the target object is located, the region position information including the regions where turnover boxes of various types are located.
In some embodiments of the present application, the second processing module 802 is further configured to:
determine the type of each of multiple turnover boxes according to the attributes of each turnover box; obtain type labeling information of each turnover box according to its type; and derive the region position information according to the type labeling information of each turnover box.
In some embodiments of the present application, the attributes of each turnover box include at least one of the following: color, texture, shape, size.
In some embodiments of the present application, the first processing module 801 being configured to perform target tracking on the target object includes:
photographing the target object multiple times with an image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
In some embodiments of the present application, the first processing module 801 being configured to perform target tracking on the target object according to each captured image includes:
eliminating the background in each captured image to obtain a background-eliminated result of each captured image; and performing target tracking on the target object according to the background-eliminated result of each captured image; where the background of the target object represents an image of a preset background object.
In some embodiments of the present application, the second processing module 802 is further configured to:
after the region where the landing point of the target object is located is determined, determine the position of the landing point of the target object according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
In some embodiments of the present application, the first processing module 801 is further configured to:
after it is determined that the robotic arm has picked up the target object from a turnover box, determine the initial position of the target object according to the pose information of the end of the robotic arm; and determine the height of the target object according to its initial position.
In some embodiments of the present application, the first processing module 801 being configured to perform target tracking on the target object includes:
performing target tracking on the target object according to its initial position to obtain the real-time position information of the target object.
In some embodiments of the present application, the second processing module 802 is further configured to:
when the region where the landing point of the target object is located is the destination turnover box, determine that the target object has been picked successfully;
when the region where the landing point of the target object is located is the source turnover box or another region, determine that picking of the target object has failed, the other region being a region other than the destination turnover box and the source turnover box.
Both the first processing module 801 and the second processing module 802 may be implemented by a processor located in an electronic device, the processor being at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller and a microprocessor.
In addition, the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Specifically, the computer program instructions corresponding to a target object detection method of this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive; when the computer program instructions in the storage medium that correspond to a target object detection method are read or executed by an electronic device, any one of the target object detection methods of the foregoing embodiments is implemented.
Correspondingly, an embodiment of the present application further provides a computer program product; the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to execute some or all of the steps of any one of the target object detection methods recorded in the above method embodiments.
Based on the same technical concept as the foregoing embodiments, referring to FIG. 9, an electronic device 90 provided by an embodiment of the present application may include: a memory 91, a processor 92 and a computer program stored in the memory 91 and executable on the processor 92; where,
the memory 91 is configured to store the computer program and data; and
the processor 92 is configured to execute the computer program stored in the memory to implement any one of the target object detection methods of the foregoing embodiments.
In practical applications, the memory 91 may be a volatile memory such as a RAM, or a non-volatile memory such as a ROM, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD), or a combination of the above kinds of memory, and it provides instructions and data to the processor 92.
The processor 92 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller and a microprocessor.
In some embodiments, the functions of, or the modules included in, the apparatus provided by the embodiments of the present application may be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which, for brevity, is not repeated here.
The above description of the embodiments tends to emphasize the differences between the embodiments; for what they have in common or similar, reference may be made from one embodiment to another, which, for brevity, is not repeated here.
The methods disclosed in the method embodiments provided by the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the product embodiments provided by the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the method or device embodiments provided by the present application may be combined arbitrarily without conflict to obtain new method or device embodiments.
From the above description of the implementations, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods described in the various embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations; the above specific implementations are merely illustrative rather than restrictive, and, under the inspiration of the present application, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
Claims (21)
- A target object detection method, applied to an electronic device, the method comprising: when it is determined that a robotic arm has picked up a target object from any one turnover box and the height of the target object is greater than the height of said turnover box, performing target tracking on the target object to obtain real-time position information of the target object; when it is determined, according to the real-time position information of the target object, that the target object has fallen from the robotic arm, determining, according to the position information of the target object at the current moment and region position information, the region where the landing point of the target object is located, the region position information comprising the regions where turnover boxes of various types are located.
- The method according to claim 1, wherein the method further comprises: determining the type of each of multiple turnover boxes according to the attributes of each turnover box; obtaining type labeling information of each turnover box according to its type; and deriving the region position information according to the type labeling information of each turnover box.
- The method according to claim 2, wherein the attributes of each turnover box comprise at least one of the following: color, texture, shape, size.
- The method according to claim 1, wherein performing target tracking on the target object comprises: photographing the target object multiple times with an image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
- The method according to claim 4, wherein performing target tracking on the target object according to each captured image comprises: eliminating the background in each captured image to obtain a background-eliminated result of each captured image; and performing target tracking on the target object according to the background-eliminated result of each captured image; wherein the background of the target object represents an image of a preset background object.
- The method according to claim 4, wherein the method further comprises: after the region where the landing point of the target object is located is determined, determining the position of the landing point of the target object according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- The method according to claim 1, wherein the method further comprises: after it is determined that the robotic arm has picked up the target object from a turnover box, determining the initial position of the target object according to the pose information of the end of the robotic arm; and determining the height of the target object according to its initial position.
- The method according to claim 7, wherein performing target tracking on the target object comprises: performing target tracking on the target object according to its initial position to obtain the real-time position information of the target object.
- The method according to any one of claims 1 to 8, wherein the method further comprises: when the region where the landing point of the target object is located is the destination turnover box, determining that the target object has been picked successfully; when the region where the landing point of the target object is located is the source turnover box or another region, determining that picking of the target object has failed, the other region being a region other than the destination turnover box and the source turnover box.
- A target object detection apparatus, the apparatus comprising: a first processing module configured to, when it is determined that a robotic arm has picked up a target object from any one turnover box and the height of the target object is greater than the height of said turnover box, perform target tracking on the target object to obtain real-time position information of the target object; and a second processing module configured to, when it is determined according to the real-time position information of the target object that the target object has fallen from the robotic arm, determine, according to the position information of the target object at the current moment and region position information, the region where the landing point of the target object is located, the region position information comprising the regions where turnover boxes of various types are located.
- The apparatus according to claim 10, wherein the second processing module is further configured to: determine the type of each of multiple turnover boxes according to the attributes of each turnover box; obtain type labeling information of each turnover box according to its type; and derive the region position information according to the type labeling information of each turnover box.
- The apparatus according to claim 11, wherein the attributes of each turnover box comprise at least one of the following: color, texture, shape, size.
- The apparatus according to claim 10, wherein the first processing module being configured to perform target tracking on the target object comprises: photographing the target object multiple times with an image acquisition device to obtain multiple captured images; and performing target tracking on the target object according to each captured image.
- The apparatus according to claim 13, wherein the first processing module being configured to perform target tracking on the target object according to each captured image comprises: eliminating the background in each captured image to obtain a background-eliminated result of each captured image; and performing target tracking on the target object according to the background-eliminated result of each captured image; wherein the background of the target object represents an image of a preset background object.
- The apparatus according to claim 13, wherein the second processing module is further configured to: after the region where the landing point of the target object is located is determined, determine the position of the landing point of the target object according to the relative positional relationship between the image acquisition device and the robotic arm and the pose information of the end of the robotic arm.
- The apparatus according to claim 10, wherein the first processing module is further configured to: after it is determined that the robotic arm has picked up the target object from a turnover box, determine the initial position of the target object according to the pose information of the end of the robotic arm; and determine the height of the target object according to its initial position.
- The apparatus according to claim 16, wherein the first processing module being configured to perform target tracking on the target object comprises: performing target tracking on the target object according to its initial position to obtain the real-time position information of the target object.
- The apparatus according to any one of claims 10 to 17, wherein the second processing module is further configured to: when the region where the landing point of the target object is located is the destination turnover box, determine that the target object has been picked successfully; and when the region where the landing point of the target object is located is the source turnover box or another region, determine that picking of the target object has failed, the other region being a region other than the destination turnover box and the source turnover box.
- An electronic device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 9.
- A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
- A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the target object detection method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/248,617 US20230410362A1 (en) | 2021-01-20 | 2022-01-13 | Target object detection method and apparatus, and electronic device, storage medium and program |
EP22742072.6A EP4207068A4 (en) | 2021-01-20 | 2022-01-13 | TARGET OBJECT DETECTION METHOD AND DEVICE AS WELL AS ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110077817.9 | 2021-01-20 | ||
CN202110077817.9A CN113744305B (zh) | 2021-01-20 | 2021-01-20 | Target object detection method and apparatus, electronic device and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022156593A1 true WO2022156593A1 (zh) | 2022-07-28 |
Family
ID=78728227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/071867 WO2022156593A1 (zh) | Target object detection method and apparatus, electronic device, storage medium and program | 2021-01-20 | 2022-01-13 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230410362A1 (zh) |
EP (1) | EP4207068A4 (zh) |
CN (1) | CN113744305B (zh) |
WO (1) | WO2022156593A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744305B (zh) * | 2021-01-20 | 2023-12-05 | 北京京东乾石科技有限公司 | Target object detection method and apparatus, electronic device and computer storage medium |
CN117671801B (zh) * | 2024-02-02 | 2024-04-23 | 中科方寸知微(南京)科技有限公司 | Real-time target detection method and system based on binary reduction |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108116058A (zh) * | 2017-12-18 | 2018-06-05 | 华南智能机器人创新研究院 | Method and system for code printing in a production line |
CN110420445A (zh) * | 2019-07-23 | 2019-11-08 | 东南大学 | Augmented-reality-based squash training method and apparatus |
CN110813784A (zh) * | 2019-11-11 | 2020-02-21 | 太仓红码软件技术有限公司 | Big-data-based intelligent sorting control method and system |
CN111185396A (zh) * | 2020-03-16 | 2020-05-22 | 中邮科技有限责任公司 | Sorting apparatus and sorting method |
WO2020244592A1 (zh) * | 2019-06-06 | 2020-12-10 | 杭州海康威视数字技术股份有限公司 | Article pick-and-place detection system, method and apparatus |
CN112091970A (zh) * | 2019-01-25 | 2020-12-18 | 牧今科技 | Robotic system with enhanced scanning mechanism |
CN113744305A (zh) * | 2021-01-20 | 2021-12-03 | 北京京东乾石科技有限公司 | Target object detection method and apparatus, electronic device and computer storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6310093B2 (ja) * | 2014-11-12 | 2018-04-11 | SZ DJI Technology Co., Ltd. | Target object detection method, detection apparatus and robot |
CN109087328A (zh) * | 2018-05-31 | 2018-12-25 | 湖北工业大学 | Computer-vision-based method for predicting the landing position of a badminton shuttlecock |
JP7243163B2 (ja) * | 2018-12-10 | 2023-03-22 | IHI Corporation | Object tracking device |
US10766141B1 (en) * | 2019-05-09 | 2020-09-08 | Mujin, Inc. | Robotic system with a coordinated transfer mechanism |
-
2021
- 2021-01-20 CN CN202110077817.9A patent/CN113744305B/zh active Active
-
2022
- 2022-01-13 US US18/248,617 patent/US20230410362A1/en active Pending
- 2022-01-13 WO PCT/CN2022/071867 patent/WO2022156593A1/zh unknown
- 2022-01-13 EP EP22742072.6A patent/EP4207068A4/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108116058A (zh) * | 2017-12-18 | 2018-06-05 | 华南智能机器人创新研究院 | Method and system for code printing in a production line |
CN112091970A (zh) * | 2019-01-25 | 2020-12-18 | 牧今科技 | Robotic system with enhanced scanning mechanism |
WO2020244592A1 (zh) * | 2019-06-06 | 2020-12-10 | 杭州海康威视数字技术股份有限公司 | Article pick-and-place detection system, method and apparatus |
CN110420445A (zh) * | 2019-07-23 | 2019-11-08 | 东南大学 | Augmented-reality-based squash training method and apparatus |
CN110813784A (zh) * | 2019-11-11 | 2020-02-21 | 太仓红码软件技术有限公司 | Big-data-based intelligent sorting control method and system |
CN111185396A (zh) * | 2020-03-16 | 2020-05-22 | 中邮科技有限责任公司 | Sorting apparatus and sorting method |
CN113744305A (zh) * | 2021-01-20 | 2021-12-03 | 北京京东乾石科技有限公司 | Target object detection method and apparatus, electronic device and computer storage medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP4207068A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP4207068A1 (en) | 2023-07-05 |
EP4207068A4 (en) | 2024-05-15 |
CN113744305B (zh) | 2023-12-05 |
US20230410362A1 (en) | 2023-12-21 |
CN113744305A (zh) | 2021-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111674817B (zh) | Control method, apparatus and device for a warehousing robot, and readable storage medium | |
JP7548516B2 (ja) | Robotic system with automated package scan and registration mechanism, and method of operating the same | |
WO2022156593A1 (zh) | Target object detection method and apparatus, electronic device, storage medium and program | |
JP7206421B2 (ja) | Smart forklift and method for detecting container position/pose deviation | |
WO2020034872A1 (zh) | Target acquisition method and device, and computer-readable storage medium | |
JP5558585B2 (ja) | Workpiece picking device | |
US9744669B2 (en) | Truck unloader visualization | |
US9694499B2 (en) | Article pickup apparatus for picking up randomly piled articles | |
EP3335090B1 (en) | Using object observations of mobile robots to generate a spatio-temporal object inventory, and using the inventory to determine monitoring parameters for the mobile robots | |
US10958895B1 (en) | High speed automated capture of 3D models of packaged items | |
JP2023529878A (ja) | Container retrieval method, apparatus and system, robot and storage medium | |
US20230044420A1 (en) | Systems and methods for object detection | |
US11981518B2 (en) | Robotic tools and methods for operating the same | |
US20240221350A1 (en) | Method and computing system for generating a safety volume list for object detection | |
US20210216767A1 (en) | Method and computing system for object recognition or object registration based on image classification | |
CN114170442A (zh) | Method and apparatus for determining spatial grasping points of a robot | |
CN111612837A (zh) | Material arranging method and material arranging device | |
CN112288038A (zh) | Method and computing system for object recognition or object registration based on image classification | |
CN112040124A (zh) | Data acquisition method, apparatus, device and system, and computer storage medium | |
CN113785299A (zh) | Image acquisition device for discovering an object | |
JP7191352B2 (ja) | Method and computing system for performing object detection | |
CN118770801A (zh) | Automated stacker crane regulation method and system based on visual analysis | |
CN118343510A (zh) | Depalletizing method, system, apparatus, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22742072 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022742072 Country of ref document: EP Effective date: 20230328 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |