WO2023168955A1 - Method, apparatus, device and computer-readable medium for determining pickup pose information - Google Patents
Method, apparatus, device and computer-readable medium for determining pickup pose information
- Publication number
- WO2023168955A1 (PCT/CN2022/128549)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- connected domain
- image
- target
- split
- domains
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- Embodiments of the present disclosure relate to the field of computer technology, and in particular to methods, devices, equipment and computer-readable media for determining pickup pose information.
- Automatic depalletizing is a technology in which, guided by vision, automatic depalletizing equipment performs target detection on the items in a source area according to received depalletizing and palletizing tasks, then picks the corresponding number of items out of the source area and places them in a designated destination area.
- At present, mathematical solution methods, traditional computer vision algorithms, or deep learning algorithms are commonly used to perform target detection on the items in the source area.
- This summary is provided to introduce concepts in a brief form that are described in detail in the detailed description that follows. It is not intended to identify key or essential features of the claimed technical solution, nor to limit its scope.
- Some embodiments of the present disclosure propose methods, devices, equipment, and computer-readable media for determining pickup pose information, to solve one or more of the technical problems mentioned in the background section above.
- In a first aspect, some embodiments of the present disclosure provide a method for determining pickup pose information.
- The method includes: generating an edge detection image using a color image and a depth image captured for a source area; performing connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image, where the connected domain segmented image includes at least one connected domain; classifying the connected domains in the connected domain segmented image to obtain a set of picked connected domains; and determining the pickup pose information of the item represented by each picked connected domain in the set to obtain a set of pickup pose information.
- In a second aspect, some embodiments of the present disclosure provide a device for determining pickup pose information.
- The device includes: a generation unit configured to generate an edge detection image using a color image and a depth image captured for a source area; a segmentation unit configured to perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image, where the connected domain segmented image includes at least one connected domain; a classification unit configured to classify the connected domains in the connected domain segmented image to obtain a set of picked connected domains; and a determination unit configured to determine the pickup pose information of the item represented by each picked connected domain in the set to obtain a set of pickup pose information.
- In a third aspect, some embodiments of the present disclosure provide an electronic device, including: at least one processor; and a storage device on which at least one program is stored, where the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described in any implementation of the first aspect.
- In a fourth aspect, some embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
- The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
- Figure 1 is a schematic diagram of an application scenario of a method for determining pickup pose information according to some embodiments of the present disclosure;
- Figure 2 is a flow chart of some embodiments of a method for determining pickup pose information according to the present disclosure
- Figure 3 is a flow chart of other embodiments of a method for determining pickup pose information according to the present disclosure
- Figure 4 is a schematic diagram of determining the set of picked connected domains in other embodiments of the picking pose information determining method according to the present disclosure
- Figure 5 is a schematic structural diagram of some embodiments of the device for determining pickup pose information of the present disclosure;
- Figure 6 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
- Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding. The drawings and embodiments are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
- It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings. Where no conflict arises, the embodiments and the features in the embodiments may be combined with each other. Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, not to limit the order or interdependence of the functions they perform; modifiers such as "a" and "multiple" are illustrative rather than restrictive and should be understood as "at least one" unless the context clearly indicates otherwise. The names of the messages or information exchanged between devices in the embodiments are for illustrative purposes only and do not limit their scope.
- Related pickup pose information determination methods, for example those that use mathematical solution methods, traditional computer vision algorithms, or deep learning algorithms to detect the items in the source area, often suffer from the following technical problems: mathematical solution methods and traditional computer vision algorithms depend on a priori information about the items, yet items are updated and replaced quickly, so the a priori information must be collected frequently and its collection takes a long time, which reduces the adaptability of automatic depalletizing methods; the target detection accuracy of deep learning methods depends on the amount of model training data, but large-scale data collection is difficult in automatic depalletizing scenarios, and the texture and shadows of items in the images also affect detection accuracy, leading to imprecise pickup pose estimates and a lower success rate of automatic depalletizing.
- To solve the problems set out above, some embodiments of the present disclosure propose methods and devices for determining pickup pose information that avoid using a priori information about the items and improve the accuracy of target detection, so that the pickup pose information can be determined more precisely and the success rate of automatic depalletizing improved. The present disclosure is described in detail below with reference to the drawings and in combination with the embodiments.
- Figure 1 is a schematic diagram of an application scenario of a method for determining pickup pose information according to some embodiments of the present disclosure.
- In the application scenario of Figure 1, the computing device 101 may first generate an edge detection image 105 using a color image 103 and a depth image 104 captured for a source area 102.
- Next, the computing device 101 may perform connected domain segmentation processing on the edge detection image 105 to obtain a connected domain segmented image 106, where the connected domain segmented image 106 includes at least one connected domain.
- Then, the computing device 101 may classify the connected domains in the connected domain segmented image 106 to obtain a set of picked connected domains 107.
- Finally, the computing device 101 may determine the pickup pose information of the item represented by each picked connected domain in the set 107 to obtain a set of pickup pose information 108.
- It should be noted that the computing device 101 may be hardware or software.
- When the computing device is hardware, it can be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device.
- When the computing device is embodied as software, it can be installed in the hardware devices listed above. It may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or a software module. No specific limitation is made here. It should be understood that the number of computing devices in Figure 1 is merely illustrative; there may be any number of computing devices according to the needs of the implementation.
- Continuing to refer to Figure 2, a flow 200 of some embodiments of the method for determining pickup pose information according to the present disclosure is shown. The method includes the following steps:
- Step 201: Generate an edge detection image using a color image and a depth image captured for the source area.
- In some embodiments, the color image and the depth image may be captured at the same time.
- The color image may be, for example, an RGB (red, green, blue) image captured by an ordinary two-dimensional camera.
- The depth image may be captured by a three-dimensional camera, or converted from images captured by the two-dimensional camera.
- The source area may be an area in which items that need to be moved or transferred are placed.
- The execution subject of the method (for example, the computing device 101 shown in Figure 1) may generate the edge detection image through the following steps:
- First, convert the color image into the YCbCr color space to obtain a YCbCr image.
- Second, generate a Y gradient image, a Cb gradient image, and a Cr gradient image from the luminance component Y and the chrominance components Cb and Cr of the YCbCr image. Each gradient image may be generated with an 8-direction circular edge detection operator.
- Third, generate a depth gradient map from the depth image. The depth value of each pixel in the depth image may be normalized, and the normalized depth values used to generate the depth gradient map, again with the 8-direction circular edge detection operator.
- Fourth, fuse the Y gradient image, the Cb gradient image, the Cr gradient image, and the depth gradient map into a target gradient map. The gradient images may be fused by linear weighting.
- Fifth, take the maximum value among the 8-direction gradients of the target gradient map as the edge detection result, obtaining the edge detection image. A sketch of these five steps follows.
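- As an illustration only: the text does not spell out the exact form of the 8-direction circular edge detection operator or the fusion weights, so the sketch below substitutes a Kirsch-style compass operator and hypothetical weights. It shows how the five steps could be combined with OpenCV.

```python
import cv2
import numpy as np

def kirsch_kernels():
    # Rotate the outer ring of a 3x3 compass kernel to obtain 8 directions.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for r in range(8):
        k = np.zeros((3, 3), np.float32)
        for i, (y, x) in enumerate(ring):
            k[y, x] = vals[(i - r) % 8]
        kernels.append(k)
    return kernels

def edge_detection_image(color_bgr, depth, weights=(0.4, 0.2, 0.2, 0.2)):
    # Step 1: convert the color image to the YCbCr color space.
    y, cr, cb = cv2.split(cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32))
    # Step 3: normalize the depth values so they are comparable to the color channels.
    depth_n = cv2.normalize(depth.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    channels = (y, cb, cr, depth_n)

    # Steps 2-4: per direction, compute each channel's gradient response and
    # fuse the responses by linear weighting into the target gradient map.
    fused = []
    for k in kirsch_kernels():
        responses = [np.abs(cv2.filter2D(c, -1, k)) for c in channels]
        fused.append(sum(w * r for w, r in zip(weights, responses)))
    # Step 5: the edge result is the per-pixel maximum over the 8 directions.
    target = np.max(np.stack(fused), axis=0)
    return cv2.normalize(target, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```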
- Step 202: Perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image.
- In some embodiments, the execution subject may perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image, where the segmented image includes at least one connected domain.
- A connected domain segmentation algorithm may be used for this processing, including but not limited to at least one of the Two-Pass (two-pass scanning) method, the Seed-Filling method, and the like.
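- A minimal sketch of this step, assuming the connected domains are the regions enclosed by edges: pixels below an assumed tuning threshold `edge_threshold` are treated as region interior, and OpenCV's built-in labeling stands in for the Two-Pass / Seed-Filling algorithms named above.

```python
import cv2
import numpy as np

def segment_connected_domains(edge_image, edge_threshold=50):
    # Non-edge pixels form the interiors of the connected domains.
    interior = (edge_image < edge_threshold).astype(np.uint8)
    # 4-connectivity keeps regions separated by thin diagonal edges apart.
    num_labels, labels = cv2.connectedComponents(interior, connectivity=4)
    return num_labels, labels  # label 0 marks background/edge pixels
```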
- Step 203: Classify the connected domains in the connected domain segmented image to obtain a set of picked connected domains.
- In some embodiments, the execution subject may classify the connected domains in the connected domain segmented image as follows:
- First, perform image dilation on the edges of each connected domain in the connected domain segmented image to obtain a dilated connected domain segmented image.
- Second, determine as picked connected domains those dilated connected domains whose ratio of actual area to a preset area value satisfies a preset condition, obtaining the set of picked connected domains.
- The preset condition may be that the ratio falls within a preset range.
- In practice, the preset range can be set according to the actual application; for example, it may be [0.8, 1.2].
- The preset area value may be the actual area of each item in the source area.
- The actual area of a dilated connected domain may be determined by the following sub-steps (a sketch follows the list):
- First, determine the average depth value of the pixels in the region of the depth image corresponding to the dilated connected domain as the distance between the item represented by that connected domain and the camera that captured the depth image.
- Second, convert the image-plane area of the dilated connected domain into an actual area using this distance and the intrinsic parameters of the camera that captured the depth image.
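- A minimal sketch of the area conversion under a pinhole-camera assumption: at distance Z, one pixel spans roughly (Z/fx) by (Z/fy) in world units, so a domain of N pixels covers approximately N * Z^2 / (fx * fy).

```python
import numpy as np

def actual_area(mask, depth, fx, fy):
    # Sub-step 1: mean depth over the domain approximates the item-camera distance.
    z = float(np.mean(depth[mask]))
    # Sub-step 2: scale the pixel count into a physical area via the intrinsics.
    n_pixels = int(np.count_nonzero(mask))
    return n_pixels * z * z / (fx * fy)
```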
- Step 204: Determine the pickup pose information of the item represented by each picked connected domain in the set of picked connected domains, obtaining a set of pickup pose information.
- In some embodiments, the execution subject may determine the pickup pose information of the item represented by each picked connected domain through the following steps:
- First, convert each two-dimensional coordinate in the picked connected domain into a three-dimensional coordinate using the intrinsic and extrinsic parameters of the camera that captured the color image, obtaining a set of three-dimensional coordinates.
- Second, perform plane fitting on the set of three-dimensional coordinates to obtain the item's fitted plane equation.
- Third, convert the two-dimensional center point coordinates of the picked connected domain into three-dimensional center point coordinates using the same intrinsic and extrinsic parameters.
- Fourth, take the normal vector of the fitted plane equation and the three-dimensional center point coordinates as the pickup pose information. A sketch of these steps follows.
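- A minimal sketch, with two simplifying assumptions: the plane is fitted by least squares via SVD, and the extrinsics are omitted, so the pose is expressed in the camera frame.

```python
import numpy as np

def pickup_pose(points_3d, center_2d, depth_at_center, K):
    # Fit a least-squares plane: the singular vector with the smallest
    # singular value is the normal of the plane through the centroid.
    centroid = points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(points_3d - centroid, full_matrices=False)
    normal = vt[-1]
    if normal[2] > 0:            # orient the normal toward the camera
        normal = -normal

    # Back-project the 2-D center with the pinhole model.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = center_2d
    z = depth_at_center
    center_3d = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return center_3d, normal     # position + approach direction
```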
- In some optional implementations of some embodiments, the execution subject may also send the set of pickup pose information to the automatic depalletizing equipment.
- The above embodiments of the present disclosure have the following beneficial effects: the pickup pose information determination methods of some embodiments avoid using a priori information about the items and improve the accuracy of target detection, so that the pickup pose information can be determined more precisely and the success rate of automatic depalletizing improved.
- Specifically, the reason target detection accuracy is low and pickup pose information is hard to determine precisely is that mathematical solution methods and traditional computer vision algorithms rely on a priori information about the items, while the target detection accuracy of deep learning methods depends on the amount of data used to train the model.
- Based on this, the pickup pose information determination methods of some embodiments generate an edge detection image from the color image and depth image captured for the source area, yielding a preliminary edge detection result.
- The connected domains in the edge detection image are then classified to obtain the set of picked connected domains. Processing thus builds on the edge detection image, and the picked connected domains in the resulting set serve as the final, more precise target detection result. In turn, the pickup pose information generated from the picked connected domains in the set is also highly accurate. Because no a priori information about the items is used in determining the pickup pose information, the technical problem of poor adaptability of automatic depalletizing methods, caused by the frequent and time-consuming collection that using a priori information entails, is avoided.
- At the same time, classifying the connected domains on top of the obtained target detection result, the edge detection image, further improves the accuracy of the target detection result.
- As a result, the pickup pose information can be determined more precisely and the success rate of automatic depalletizing improved.
- With further reference to Figure 3, a flow 300 of other embodiments of the method for determining pickup pose information is shown.
- The flow 300 of the method includes the following steps:
- Step 301: Generate an edge detection image using a color image and a depth image captured for the source area.
- In some embodiments, the color image and the depth image may be captured at the same time.
- The color image may be an RGB image captured by an ordinary two-dimensional camera, and the like.
- The depth image may be captured by a three-dimensional camera, or converted from images captured by the two-dimensional camera.
- The source area may be an area in which items that need to be moved or transferred are placed.
- The execution subject of the method (for example, the computing device 101 shown in Figure 1) may generate the edge detection image through the following steps:
- Step 3011: Extract a region of interest from the depth image.
- In some embodiments, the execution subject may extract the region of interest from the depth image as follows (a sketch follows the list):
- First, generate a source area plane equation using the position information of the source area and the depth image.
- The source area position information may include the three-dimensional coordinates of each corner point of the source area, and may be collected and stored in advance.
- The extent of the source area can be determined from the three-dimensional coordinates of its corner points.
- The source area plane equation can be generated from the corner point coordinates and the depth information of the depth image.
- Second, using the depth information of the pixels in the depth image and the source area plane equation, select the pixels located within the source area as target pixels, obtaining a set of target pixels.
- The distance between a pixel and the source area plane can be determined from the pixel's depth information; if the distance is non-negative, the pixel can be considered to lie within the source area.
- Third, convert each target pixel in the set into point cloud data, obtaining a point cloud data set.
- Fourth, perform plane fitting on the point cloud data set to obtain a fitted plane equation.
- Fifth, according to the fitted plane equation, select from the point cloud data set the points that satisfy a third preset condition as target point cloud data, obtaining a target point cloud data set.
- The third preset condition may be that the distance between the point represented by the point cloud data and the fitted plane is greater than a target distance value.
- The target distance value may be the arithmetic mean of the distances between the points represented by all the point cloud data in the set and the fitted plane.
- Sixth, convert the target point cloud data in the target point cloud data set into a two-dimensional coordinate system using the intrinsic parameters of the camera that captured the depth image, obtaining a set of two-dimensional coordinates.
- Seventh, generate the region of interest from the set of two-dimensional coordinates.
- Optionally, the execution subject may determine the minimum enclosing polygon of the point set corresponding to the two-dimensional coordinates as the region of interest.
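- A minimal sketch of the core of step 3011, with assumptions: the in-area pixel selection via pre-stored corner points is omitted, the fitted plane comes from a least-squares SVD fit, and the minimum enclosing polygon is taken to be the convex hull.

```python
import cv2
import numpy as np

def extract_roi(depth, K):
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Third step: back-project valid depth pixels into a point cloud.
    v, u = np.nonzero(depth > 0)
    z = depth[v, u].astype(np.float64)
    pts = np.column_stack([(u - cx) * z / fx, (v - cy) * z / fy, z])

    # Fourth step: least-squares plane through the centroid via SVD.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    dist = np.abs((pts - centroid) @ vt[-1])

    # Fifth step: keep points farther from the plane than the mean distance
    # (the "third preset condition"), i.e. the items standing on the plane.
    keep = dist > dist.mean()

    # Sixth/seventh steps: project back to 2-D and hull the surviving pixels.
    uv = np.column_stack([u[keep], v[keep]]).astype(np.int32)
    return cv2.convexHull(uv)  # ROI as a convex polygon
```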
- Step 3012: Mask the color image using the region of interest to obtain a mask image.
- The mask processing may set the pixel values of the pixels in the color image that lie outside the region of interest to a preset value.
- In practice, the preset value can be set according to the actual situation and is not limited here; for example, it may be 0.
- Step 3013: Perform edge detection processing on the mask image to obtain an edge detection image.
- An edge detection operator can be used to perform edge detection processing on the mask image. Such operators may include but are not limited to the Sobel, Prewitt, and Roberts operators. A sketch of steps 3012-3013 follows.
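- A minimal sketch of steps 3012-3013, assuming the ROI is given as a polygon (e.g. the hull from the previous sketch) and choosing the Sobel operator from the list above:

```python
import cv2
import numpy as np

def mask_and_detect_edges(color_bgr, roi_polygon):
    # Step 3012: zero out (preset value 0) every pixel outside the ROI.
    mask = np.zeros(color_bgr.shape[:2], np.uint8)
    cv2.fillPoly(mask, [roi_polygon], 255)
    masked = cv2.bitwise_and(color_bgr, color_bgr, mask=mask)

    # Step 3013: Sobel gradient magnitude as the edge detection result.
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```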
- Step 302: Perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image.
- In some embodiments, for the specific implementation of step 302 and the technical effects it brings, reference may be made to step 202 in the embodiments corresponding to Figure 2, which is not repeated here.
- Step 303: Classify the connected domains in the connected domain segmented image to obtain a set of picked connected domains.
- In some embodiments, the execution subject may classify the connected domains in the connected domain segmented image through the following steps:
- Step 3031: Determine a reference area value from the area values of the connected domains in the connected domain segmented image.
- The mode of the area values of the connected domains in the connected domain segmented image may be determined as the reference area value.
- Step 3032: Using the reference area value, classify each connected domain in the connected domain segmented image into a set of target connected domains, a set of connected domains to be split, and a set of connected domains to be spliced.
- First, the ratio of each connected domain's area value to the reference area value can be determined, giving a set of area ratios. Then, connected domains whose area ratio is greater than or equal to a first area value and less than a second area value are selected from the connected domain segmented image as connected domains to be spliced, forming the set of connected domains to be spliced. Next, connected domains whose area ratio is greater than or equal to the second area value and less than a third area value are selected as target connected domains, forming the set of target connected domains.
- Finally, connected domains whose area ratio is greater than or equal to the third area value and less than a fourth area value are selected as connected domains to be split, forming the set of connected domains to be split.
- In practice, the first through fourth area values can be set according to the actual application and are not limited here.
- For example, the first area value may be 0.2, the second area value 0.8, the third area value 1.2, and the fourth area value 10. A sketch of steps 3031-3032 follows.
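- A minimal sketch of steps 3031-3032, using the example thresholds above; the mode is taken over rounded area values so that it is well defined for real-valued areas (an assumption, since the text does not say how ties are handled):

```python
import numpy as np

def classify_domains(areas, thresholds=(0.2, 0.8, 1.2, 10.0)):
    areas = np.asarray(areas, dtype=float)
    # Step 3031: the reference area is the mode of the (rounded) area values.
    values, counts = np.unique(np.round(areas), return_counts=True)
    reference = values[np.argmax(counts)]
    ratios = areas / reference

    # Step 3032: bucket each domain by its area ratio to the reference.
    t1, t2, t3, t4 = thresholds
    to_splice = np.nonzero((ratios >= t1) & (ratios < t2))[0]  # fragments of items
    targets   = np.nonzero((ratios >= t2) & (ratios < t3))[0]  # single items
    to_split  = np.nonzero((ratios >= t3) & (ratios < t4))[0]  # merged items
    return targets, to_split, to_splice, reference
```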
- Step 3033: Perform clustering on the set of target connected domains, the set of connected domains to be split, and the set of connected domains to be spliced to obtain the set of picked connected domains.
- In some embodiments, the execution subject may perform this clustering through the following steps:
- First, cluster each connected domain to be split in the set of connected domains to be split, based on the reference area value, to obtain a first set of clustered connected domains.
- A clustering algorithm may be used to cluster the connected domains to be split.
- Such clustering algorithms may include but are not limited to the K-Means clustering algorithm, the RCF (Richer Convolutional Features) network, and the like.
- Optionally, clustering each connected domain to be split based on the reference area value may include the following sub-steps (see the sketch after this list):
- First sub-step: determine the number of split clusters from the ratio of the area of the connected domain to be split to the reference area value. The ratio can be rounded to obtain the number of split clusters.
- Second sub-step: split the connected domain to be split based on the number of split clusters, obtaining the first clustered connected domains. The clustering algorithm above can be used to perform the splitting.
- Second, add the first clustered connected domains in the first set that satisfy a first preset condition to the set of target connected domains as target connected domains.
- The first preset condition may be that the ratio of the clustered connected domain's area to the reference area value is greater than or equal to the second area value and less than the third area value.
- Third, add the first clustered connected domains in the first set that satisfy a second preset condition to the set of connected domains to be spliced as connected domains to be spliced.
- The second preset condition may be that the ratio of the clustered connected domain's area to the reference area value is greater than or equal to the first area value and less than the second area value.
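- A minimal sketch of the splitting sub-steps, choosing K-Means (one of the algorithms named above) over pixel coordinates; scikit-learn is an assumed dependency:

```python
import numpy as np
from sklearn.cluster import KMeans

def split_domain(mask, reference_area):
    ys, xs = np.nonzero(mask)
    # Sub-step 1: the split cluster count is the rounded area ratio.
    k = max(1, int(round(len(xs) / reference_area)))
    if k == 1:
        return [mask]

    # Sub-step 2: K-Means on pixel coordinates splits the merged domain.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(np.column_stack([xs, ys]))
    parts = []
    for i in range(k):
        part = np.zeros_like(mask, dtype=bool)
        part[ys[labels == i], xs[labels == i]] = True
        parts.append(part)
    return parts
```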
- Optionally, the execution subject's clustering of the set of target connected domains, the set of connected domains to be split, and the set of connected domains to be spliced may further include the following steps:
- First, cluster each connected domain to be spliced in the set of connected domains to be spliced, obtaining a second set of clustered connected domains.
- The clustering algorithm above may be used to cluster the connected domains to be spliced.
- Second, add the second clustered connected domains in the second set that satisfy the first preset condition to the set of target connected domains as target connected domains.
- Third, determine the set of target connected domains as the set of picked connected domains.
- As an example, referring to Figure 4: first, the connected domains to be spliced in the set 401 can be clustered to obtain a second set of clustered connected domains 402.
- Then, the second clustered connected domains in the set 402 that satisfy the first preset condition 403 can be added to the set of target connected domains 404 as target connected domains.
- Finally, the set of target connected domains 404 can be determined as the set of picked connected domains 405.
- In this way, the connected domains in the edge detection image can be partitioned and processed through clustering, achieving more accurate target recognition.
- Step 304: Determine the pickup pose information of the item represented by each picked connected domain in the set of picked connected domains, obtaining a set of pickup pose information.
- In some embodiments, for the specific implementation of step 304 and the technical effects it brings, reference may be made to step 204 in the embodiments corresponding to Figure 2, which is not repeated here.
- As can be seen from Figure 3, compared with the description of the embodiments corresponding to Figure 2, the flow 300 of the method embodies the use of a clustering algorithm to cluster the preliminary edge detection result, i.e. the edge detection image. The solutions described in these embodiments can therefore partition the connected domains in the edge detection image more accurately through clustering, making it convenient to determine the pickup pose from the accurately partitioned connected domains.
- With further reference to Figure 5, as an implementation of the methods shown in the figures above, the present disclosure provides some embodiments of a device for determining pickup pose information. These device embodiments correspond to the method embodiments shown in Figure 2.
- The device can be applied in various electronic devices.
- As shown in Figure 5, the pickup pose information determination device 500 of some embodiments includes: a generation unit 501, a segmentation unit 502, a classification unit 503, and a determination unit 504.
- The generation unit 501 is configured to generate an edge detection image using a color image and a depth image captured for a source area;
- the segmentation unit 502 is configured to perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image, where the connected domain segmented image includes at least one connected domain;
- the classification unit 503 is configured to classify the connected domains in the connected domain segmented image to obtain a set of picked connected domains;
- and the determination unit 504 is configured to determine the pickup pose information of the item represented by each picked connected domain in the set, obtaining a set of pickup pose information.
- In optional implementations of some embodiments, the pickup pose information determination device 500 further includes a sending unit configured to send the set of pickup pose information to the automatic depalletizing equipment.
- In optional implementations of some embodiments, the generation unit 501 includes an extraction subunit, a mask subunit, and an edge detection subunit.
- The extraction subunit is configured to extract the region of interest from the depth image;
- the mask subunit is configured to mask the color image using the region of interest to obtain a mask image;
- and the edge detection subunit is configured to perform edge detection processing on the mask image to obtain an edge detection image.
- In optional implementations of some embodiments, the classification unit 503 includes a first determination subunit, a classification subunit, and a clustering subunit.
- The first determination subunit is configured to determine a reference area value from the area values of the connected domains in the connected domain segmented image;
- the classification subunit is configured to use the reference area value to classify the connected domains in the connected domain segmented image, obtaining a set of target connected domains, a set of connected domains to be split, and a set of connected domains to be spliced;
- and the clustering subunit is configured to cluster the set of target connected domains, the set of connected domains to be split, and the set of connected domains to be spliced, obtaining the set of picked connected domains.
- In optional implementations of some embodiments, the clustering subunit further includes a first clustering module, a first adding module, and a second adding module.
- The first clustering module is configured to cluster each connected domain to be split in the set of connected domains to be split, based on the reference area value, to obtain a first set of clustered connected domains;
- the first adding module is configured to add the first clustered connected domains in the first set that satisfy the first preset condition to the set of target connected domains as target connected domains;
- and the second adding module is configured to add the first clustered connected domains in the first set that satisfy the second preset condition to the set of connected domains to be spliced as connected domains to be spliced.
- In optional implementations of some embodiments, the clustering subunit further includes a second clustering module, a third adding module, and a second determination module.
- The second clustering module is configured to cluster each connected domain to be spliced in the set of connected domains to be spliced, obtaining a second set of clustered connected domains;
- the third adding module is configured to add the second clustered connected domains in the second set that satisfy the first preset condition to the set of target connected domains as target connected domains;
- and the second determination module is configured to determine the set of target connected domains as the set of picked connected domains.
- In optional implementations of some embodiments, the extraction subunit includes: a first generation module, a first selection module, a first conversion module, a plane fitting module, a second selection module, a second conversion module, and a second generation module.
- The first generation module is configured to generate a source area plane equation using the position information of the source area and the depth image;
- the first selection module is configured to use the depth information of the pixels in the depth image and the source area plane equation to select the pixels located within the source area as target pixels, obtaining a set of target pixels;
- the first conversion module is configured to convert each target pixel in the set into point cloud data, obtaining a point cloud data set;
- the plane fitting module is configured to perform plane fitting on the point cloud data set to obtain a fitted plane equation;
- the second selection module is configured to select, according to the fitted plane equation, the point cloud data that satisfy the third preset condition from the point cloud data set as target point cloud data, obtaining a target point cloud data set;
- the second conversion module is configured to convert the target point cloud data in the target point cloud data set into a two-dimensional coordinate system using the intrinsic parameters of the camera that captured the depth image, obtaining a set of two-dimensional coordinates;
- and the second generation module is configured to generate the region of interest from the set of two-dimensional coordinates.
- In optional implementations of some embodiments, the first clustering module includes a determination submodule and a splitting submodule.
- The determination submodule is configured to determine the number of split clusters from the ratio of the area of the connected domain to be split to the reference area value;
- and the splitting submodule is configured to split the connected domain to be split based on the number of split clusters, obtaining the first clustered connected domains.
- In optional implementations of some embodiments, the first determination subunit is configured to determine the mode of the area values of the connected domains in the connected domain segmented image as the reference area value.
- In optional implementations of some embodiments, the first conversion module is configured to determine the minimum enclosing polygon of the point set corresponding to the set of two-dimensional coordinates as the region of interest.
- It can be understood that the units recorded in the device 500 correspond to the steps of the method described with reference to Figure 2. Therefore, the operations, features, and beneficial effects described above for the method also apply to the device 500 and the units it contains, and are not repeated here.
- Referring now to Figure 6, a schematic structural diagram of an electronic device 600 suitable for implementing some embodiments of the present disclosure is shown.
- The electronic device shown in Figure 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
- As shown in Figure 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
- The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
- The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604.
- An input/output (I/O) interface 605 is also connected to the bus 604.
- In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; and a communication device 609.
- The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data.
- Although Figure 6 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided. Each block shown in Figure 6 may represent one device, or multiple devices as needed.
- In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs.
- For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart.
- In some such embodiments, the computer program may be downloaded and installed from a network via the communication device 609, installed from the storage device 608, or installed from the ROM 602.
- When the computer program is executed by the processing device 601, the above-described functions defined in the methods of some embodiments of the present disclosure are performed.
- It should be noted that the computer-readable medium described in some embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two.
- A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- In some embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- In some embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
- A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to electrical wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
- In some implementations, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
- Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
- The computer-readable medium may be included in the electronic device described above, or it may exist separately without being assembled into the electronic device.
- The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: generate an edge detection image using a color image and a depth image captured for a source area;
- perform connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image, where the connected domain segmented image includes at least one connected domain; classify the connected domains in the connected domain segmented image to obtain a set of picked connected domains;
- and determine the pickup pose information of the item represented by each picked connected domain in the set, obtaining a set of pickup pose information.
- Computer program code for performing the operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
- Each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains at least one executable instruction for implementing the specified logical function.
- It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved.
- Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- The units described in some embodiments of the present disclosure may be implemented in software or in hardware.
- The described units may also be provided in a processor, which may, for example, be described as: a processor including a generation unit, a segmentation unit, a classification unit, and a determination unit.
- The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the generation unit may also be described as an "edge detection image generation unit".
- Exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so on.
Abstract
A method, apparatus, device and computer-readable medium for determining pickup pose information. The method includes: generating an edge detection image using a color image and a depth image captured for a source area (201); performing connected domain segmentation processing on the edge detection image to obtain a connected domain segmented image (202), where the connected domain segmented image includes at least one connected domain; classifying the connected domains in the connected domain segmented image to obtain a set of picked connected domains (203); and determining the pickup pose information of the item represented by each picked connected domain in the set to obtain a set of pickup pose information (204).
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202210220879.5, filed on March 8, 2022 and entitled "Method, apparatus, device and computer-readable medium for determining pickup pose information", the entire contents of which are incorporated herein by reference.
本公开的实施例涉及计算机技术领域,具体涉及拾取位姿信息确定方法、装置、设备和计算机可读介质。
自动拆码垛是指自动拆码垛设备在视觉引导的基础上,根据接收到的拆垛和码垛任务,对来源区域中的物品进行目标检测,进而把来源区域中内相应数量的物品拾取出,并放入指定目的区域的一项技术。目前,往往利用数学解算方法、传统计算机视觉算法或深度学习算法对来源区域中的物品进行目标检测。
发明内容
本公开的内容部分用于以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。本公开的内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
本公开的一些实施例提出了拾取位姿信息确定方法、装置、设备和计算机可读介质,来解决以上背景技术部分提到的技术问题中的一项或多项。
第一方面,本公开的一些实施例提供了一种拾取位姿信息确定方 法,该方法包括:利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像;对上述边缘检测图像进行连通域分割处理,得到连通域分割图像,其中,上述连通域分割图像包括至少一个连通域;对上述连通域分割图像中的连通域进行分类处理,得到拾取连通域集合;确定上述拾取连通域集合中每个拾取连通域所表征的物品的拾取位姿信息,得到拾取位姿信息集合。
第二方面,本公开的一些实施例提供了一种拾取位姿信息确定装置,装置包括:生成单元,被配置成利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像;分割单元,被配置成对上述边缘检测图像进行连通域分割处理,得到连通域分割图像,其中,上述连通域分割图像包括至少一个连通域;分类单元,被配置成对上述连通域分割图像中的连通域进行分类处理,得到拾取连通域集合;确定单元,被配置成确定上述拾取连通域集合中每个拾取连通域所表征的物品的拾取位姿信息,得到拾取位姿信息集合。
第三方面,本公开的一些实施例提供了一种电子设备,包括:至少一个处理器;存储装置,其上存储有至少一个程序,当至少一个程序被至少一个处理器执行,使得至少一个处理器实现上述第一方面任一实现方式所描述的方法。
第四方面,本公开的一些实施例提供了一种计算机可读介质,其上存储有计算机程序,其中,程序被处理器执行时实现上述第一方面任一实现方式所描述的方法。
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,元件和元素不一定按照比例绘制。
图1是本公开的一些实施例的拾取位姿信息确定方法的一个应用场景的示意图;
图2是根据本公开的拾取位姿信息确定方法的一些实施例的流程 图;
图3是根据本公开的拾取位姿信息确定方法的另一些实施例的流程图;
图4是根据本公开的拾取位姿信息确定方法的另一些实施例中的确定拾取连通域集合的示意图;
图5是本公开的拾取位姿信息确定装置的一些实施例的结构示意图;
图6是适于用来实现本公开的一些实施例的电子设备的结构示意图。
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例。相反,提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
另外还需要说明的是,为了便于描述,附图中仅示出了与有关发明相关的部分。在不冲突的情况下,本公开中的实施例及实施例中的特征可以相互组合。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“至少一个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
相关的拾取位姿信息确定方法,例如,利用数学解算方法、传统 计算机视觉算法或深度学习算法对来源区域中的物品进行目标检测等经常会存在如下技术问题:数学解算方法和传统计算机视觉算法依赖于物品的先验信息,而物品更新换代快,需要频繁的进行先验信息的采集,且先验信息的采集耗时较长,降低了自动拆码垛方法的适应性。深度学习方法的目标检测精确度依赖于模型训练的数据量,但是在自动拆码垛场景中,难以进行大规模的数据采集,图像中物品的纹理以及阴影也会影响目标检测的精确度,导致拾取位姿估算不精确,降低了自动拆码垛的成功率。
为了解决以上所阐述的问题,本公开的一些实施例提出了拾取位姿信息确定方法及装置,可以避免使用物品的先验信息,并提高目标检测的准确度,进而,可以更为精确的确定拾取位姿信息,提高自动拆码垛的成功率。
下面将参考附图并结合实施例来详细说明本公开。
图1是本公开的一些实施例的拾取位姿信息确定方法的一个应用场景的示意图。
在图1的应用场景中,首先,计算设备101可以利用针对来源区域102拍摄的彩色图像103和深度图像104,生成边缘检测图像105。接着,计算设备101可以对上述边缘检测图像105进行连通域分割处理,得到连通域分割图像106,其中,上述连通域分割图像106包括至少一个连通域。然后,计算设备101可以对上述连通域分割图像106中的连通域进行分类处理,得到拾取连通域集合107。最后,计算设备101可以确定上述拾取连通域集合107中每个拾取连通域所表征的物品的拾取位姿信息,得到拾取位姿信息集合108。
需要说明的是,上述计算设备101可以是硬件,也可以是软件。当计算设备为硬件时,可以实现成多个服务器或终端设备组成的分布式集群,也可以实现成单个服务器或单个终端设备。当计算设备体现为软件时,可以安装在上述所列举的硬件设备中。其可以实现成例如用来提供分布式服务的多个软件或软件模块,也可以实现成单个软件或软件模块。在此不做具体限定。
应该理解,图1中的计算设备的数目仅仅是示意性的。根据实现 需要,可以具有任意数目的计算设备。
继续参考图2,示出了根据本公开的拾取位姿信息确定方法的一些实施例的流程200。该拾取位姿信息确定方法,包括以下步骤:
步骤201,利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像。
在一些实施例中,上述彩色图像和上述深度图像可以是同时拍摄的图像。上述彩色图像可以是由普通的二维相机拍摄的RGB(红绿蓝)图像等彩色图像。上述深度图像可以由三维相机拍摄,也可以由上述二维相机拍摄的图像转换得到。上述来源区域可以是放置有需要移动或转移的物品的区域。拾取位姿信息确定方法的执行主体(如图1所示的计算设备101)利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像,可以包括以下步骤:
第一步,将上述彩色图像转换至YCbCr颜色空间中,得到YCbCr图像。
第二步,利用上述YCbCr图像的亮度分量Y、颜色分量Cb和Cr,生成Y梯度图像、Cb梯度图像和Cr梯度图像。其中,可以利用8方向圆形边缘检测算子生成各个梯度图像。
第三步,利用上述深度图像,生成深度梯度图。其中,可以对上述深度图像中各个像素点的深度值进行归一化处理,利用归一化后的深度值生成深度梯度图。可以利用上述8方向圆形边缘检测算子生成上述深度梯度图。
第四步,对上述Y梯度图像、上述Cb梯度图像、上述Cr梯度图像和上述深度梯度图进行融合处理,得到目标梯度图。其中,可以通过线性加权方式对各个梯度图像进行融合。
第五步,将上述目标梯度图的8方向梯度中的最大值作为边缘检测结果,得到边缘检测图像。
步骤202,对边缘检测图像进行连通域分割处理,得到连通域分割图像。
在一些实施例中,上述执行主体可以对上述边缘检测图像进行连 通域分割处理,得到连通域分割图像。其中,上述连通域分割图像包括至少一个连通域。可以利用连通域分割算法对上述边缘检测图像进行连通域分割处理。上述连通域分割算法包括但不限于以下至少一项Two-Pass法(两遍扫描法)、Seed-Filling(种子填充)法等。
步骤203,对连通域分割图像中的连通域进行分类处理,得到拾取连通域集合。
在一些实施例中,上述执行主体对上述连通域分割图像中的连通域进行分类处理,得到拾取连通域集合,可以包括以下步骤:
第一步,对上述连通域分割图像中各个连通域的边缘进行图像膨胀处理,得到膨胀连通域分割图像。
第二步,将上述膨胀连通域分割图像中实际面积与预设面积值的比值满足预设条件的膨胀连通域确定为拾取连通域,得到拾取连通域集合。其中,上述预设条件可以是比值在预设范围内。实践中,上述预设范围可以根据实际应用情况进行设置。例如,上述预设范围可以是[0.8,1.2]。上述预设面积值可以是上述来源区域中每个物品的实际面积值。
上述膨胀连通域的实际面积可以通过以下子步骤确定:
第一子步骤,将上述深度图像中与上述膨胀连通域对应的区域内的各个像素的深度值的平均值,确定为上述膨胀连通域所表征的物品与拍摄上述深度图像的相机之间的距离。
第二子步骤,利用上述距离和拍摄上述深度图像的相机的内参,将上述膨胀连通域在图像中的面积转换为实际面积。
步骤204,确定拾取连通域集合中每个拾取连通域所表征的物品的拾取位姿信息,得到拾取位姿信息集合。
在一些实施例中,上述执行主体确定上述拾取连通域集合中每个拾取连通域所表征的物品的拾取位姿信息,可以包括以下步骤:
第一步,利用拍摄上述彩色图像的相机的内参和外参,将上述拾取连通域内的各个二维坐标转换为三维坐标,得到三维坐标集合。
第二步,对上述三维坐标集合进行平面拟合处理,得到物品拟合平面方程。
第三步,利用上述拍摄上述彩色图像的相机的内参和外参,将上述拾取连通域的二维中心点坐标转换为三维中心点坐标。
第四步,将上述物品拟合平面方程的法向量和上述三维中心点坐标作为拾取位姿信息。
在一些实施例的一些可选的实现方式中,上述执行主体还可以将上述拾取位姿信息集合发送至自动拆码垛设备。
本公开的上述各个实施例具有如下有益效果:通过本公开的一些实施例的拾取位姿信息确定方法,可以避免使用物品的先验信息,并提高目标检测的准确度,进而,可以更为精确的确定拾取位姿信息,提高自动拆码垛的成功率。具体来说,造成目标检测的准确度较低,难以较为精确的确定拾取位姿信息的原因在于:数学解算方法和传统计算机视觉算法依赖于物品的先验信息,深度学习方法的目标检测精确度依赖于模型训练的数据量。基于此,本公开的一些实施例的拾取位姿信息确定方法,利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像。由此,得到初步的边缘检测结果。接着,对边缘检测图像中的连通域进行分类处理,得到拾取连通域集合。由此,在边缘检测图像的基础之上进行处理,将得到的拾取连通域集合中的拾取连通域作为最终较为精确的目标检测结果。进而,使得根据拾取连通域集合中的拾取连通域生成的拾取位姿信息也具有较高的精确性。由于在上述拾取位姿信息确定的过程中,未使用物品的先验信息。从而,避免了使用物品的先验信息所带来的频繁采集先验信息及采集时间较长所导致的自动拆码垛方法的适应性较差的这一技术问题。同时,在得到的目标检测结果—边缘检测图像的基础之上,对其中的连通域进行分类处理,进而,提升目标检测结果的准确性。从而,可以更为精确的确定拾取位姿信息,提高自动拆码垛的成功率。
参考图3,其示出了拾取位姿信息确定方法的另一些实施例的流程300。该拾取位姿信息确定方法的流程300,包括以下步骤:
步骤301,利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像。
在一些实施例中,上述彩色图像和上述深度图像可以是同时拍摄的图像。上述彩色图像可以是普通的二维相机所拍摄的RGB图像等。上述深度图像可以由三维相机拍摄,也可以由上述二维相机拍摄的图像转换得到。上述来源区域可以是放置有需要移动或转移的物品的区域。拾取位姿信息确定方法的执行主体(如图1所示的计算设备101)利用针对来源区域拍摄的彩色图像和深度图像,生成边缘检测图像,可以包括以下步骤:
步骤3011,提取深度图像中的感兴趣区域。
在一些实施例中,上述执行主体提取上述深度图像中的感兴趣区域,可以包括以下步骤:
第一步,利用上述来源区域的位置信息和上述深度图像,生成来源区域平面方程。其中,上述来源区域位置信息可以包括来源区域各个角点的三维坐标。上述来源区域位置信息可以是预先采集并存储的。利用上述来源区域各个角点的三维坐标可以确定上述来源区域的范围。可以利用上述来源区域各个角点的三维坐标和深度图像的深度信息生成来源区域平面方程。
第二步,利用上述深度图像中像素点的深度信息和上述来源区域平面方程,从上述深度图像中选择出位于上述来源区域内的像素点作为目标像素点,得到目标像素点集合。其中,可以根据上述深度图像中每个像素点的深度信息,确定该像素点与上述来源区域平面方程之间的距离,若距离为非负数,则可以确定该像素点位于上述来源区域内。
第三步,将上述目标像素点集合中的每个目标像素点转换为点云数据,得到点云数据集合。
第四步,对上述点云数据集合进行平面拟合处理,得到拟合平面方程。
第五步,根据上述拟合平面方程,从上述点云数据集合中选择出满足第三预设条件的点云数据作为目标点云数据,得到目标点云数据集合。其中,上述第三预设条件可以是点云数据所表征的点与上述拟合平面方程之间的距离大于目标距离值。上述目标距离值可以是点云 数据集合中各个点云数据所表征的点与上述拟合平面方程之间的距离的算术平均值。
第六步,利用拍摄上述深度图像的相机的内参将上述目标点云数据集合中的目标点云数据转换至二维坐标系下,得到二维坐标集合。
第七步,根据上述二维坐标集合生成感兴趣区域。
可选的,上述执行主体可以将上述二维坐标集合对应的点集的最小外接多边形确定为感兴趣区域。
步骤3012,利用感兴趣区域对彩色图像进行掩膜处理,得到掩膜图像。
上述掩膜处理可以是将上述彩色图像中位于上述感兴趣区域外的像素的像素值设置为预设数值。实践中,上述预设数值可以根据实际情况进行设置,此处不做限定。例如,上述预设数值可以是0。
步骤3013,对掩膜图像进行边缘检测处理,得到边缘检测图像。
可以利用边缘检测算子对上述掩膜图像进行边缘检测处理。其中,上述边缘检测算子可以包括但不限于:Sobel(索贝尔)算子、Prewitt(蒲瑞维特)算子和Roberts(罗伯特)算子。
步骤302,对边缘检测图像进行连通域分割处理,得到连通域分割图像。
在一些实施例中,步骤302的具体实现方式及所带来的技术效果可以参考图2对应的那些实施例中的步骤202,在此不再赘述。
步骤303,对连通域分割图像中的连通域进行分类处理,得到拾取连通域集合。
在一些实施例中,上述执行主体对上述连通域分割图像中的连通域进行分类处理,得到拾取连通域集合,可以包括以下步骤:
步骤3031,根据连通域分割图像中各个连通域的面积值,确定参照面积值。
可以将上述连通域分割图像中各个连通域的面积值中的众数确定为参照面积值。
步骤3032,利用参照面积值,对连通域分割图像中的各个连通域进行分类,得到目标连通域集合、待拆分连通域集合和待拼接连通域 集合。
首先,可以确定连通域分割图像中每个连通域的面积值与上述参照面积值的比值,得到面积比值集合。然后,从上述连通域分割图像中选择出面积比值大于等于第一面积值且小于第二面积值的连通域作为待拼接连通域,得到待拼接连通域集合。接着,从上述连通域分割图像中选择出面积比值大于等于第二面积值且小于第三面积值的连通域作为目标连通域,得到目标连通域集合。最后,从上述连通域分割图像中选择出面积比值大于等于第三面积值且小于第四面积值的连通域作为待拆分连通域,得到待拆分连通域集合。
实践中,上述第一面积值、第二面积值和第三面积值可以根据实际应用进行设置,此处不做限定。例如,上述第一面积值可以是0.2,上述第二面积值可以是0.8,上述第三面积值可以是1.2,上述第三面积值可以是10。
步骤3033,对目标连通域集合、待拆分连通域集合和待拼接连通域集合进行聚类处理,得到拾取连通域集合。
在一些实施例中,上述执行主体对上述目标连通域集合、上述待拆分连通域集合和上述待拼接连通域集合进行聚类处理,得到拾取连通域集合,可以包括以下步骤:
第一步,基于上述参照面积值对上述待拆分连通域集合中的每个待拆分连通域进行聚类处理,得到第一聚类连通域集合。其中,可以利用聚类算法对上述待拆分连通域进行聚类处理。上述聚类算法可以包括但不限于K-Means(K均值)聚类算法、RCF(Richer Convolutional Features,丰富卷积特征)网络等。
可选的,上述执行主体基于上述参照面积值对上述待拆分连通域集合中的每个待拆分连通域进行聚类处理,得到第一聚类连通域集合,可以包括以下子步骤:
第一子步骤,根据上述待拆分连通域的面积和上述参照面积值的比值,确定拆分聚类数。其中,可以对上述比值进行四舍五入处理,得到拆分聚类数。
第二子步骤,基于上述拆分聚类数对上述待拆分连通域进行拆分 处理,得到各个第一聚类连通域。其中,可以利用上述聚类算法对上述待拆分连通域进行拆分处理。
第二步,将上述第一聚类连通域集合中满足第一预设条件的第一聚类连通域作为目标连通域加入上述目标连通域集合。其中,上述第一预设条件可以是第一聚类连通域的面积与上述参照面积值的比值大于等于上述第二面积值且小于上述第三面积值。
第三步,将上述第一聚类连通域集合中满足第二预设条件的第一聚类连通域作为待拼接连通域加入上述待拼接连通域集合。其中,上述第二预设条件可以是第一聚类连通域的面积与上述参照面积值的比值大于等于上述第一面积值且小于上述第二面积值。
可选的,上述执行主体对上述目标连通域集合、上述待拆分连通域集合和上述待拼接连通域集合进行聚类处理,得到拾取连通域集合,还可以包括以下步骤:
第一步,对上述待拼接连通域集合中的各个待拼接连通域进行聚类处理,得到第二聚类连通域集合。其中,可以利用上述聚类算法对上述待拼接连通域集合中的各个待拼接连通域进行聚类处理。
第二步,将上述第二聚类连通域集合中满足上述第一预设条件的第二聚类连通域作为目标连通域加入上述目标连通域集合。
第三步,将上述目标连通域集合确定为拾取连通域集合。
作为示例,参考图4,首先,可以对上述待拼接连通域集合401中的各个待拼接连通域进行聚类处理,得到第二聚类连通域集合402。然后,可以将上述第二聚类连通域集合402中满足上述第一预设条件403的第二聚类连通域作为目标连通域加入上述目标连通域集合404。最后,可以将上述目标连通域集合404确定为拾取连通域集合405。
由此,可以通过聚类处理对边缘检测图像中的连通域进行划分和处理,从而实现更为精确的目标识别。
步骤304,确定拾取连通域集合中每个拾取连通域所表征的物品的拾取位姿信息,得到拾取位姿信息集合。
在一些实施例中,步骤304的具体实现方式及所带来的技术效果可以参考图2对应的那些实施例中的步骤204,在此不再赘述。
从图3中可以看出,与图2对应的一些实施例的描述相比,图3对应的一些实施例中的拾取位姿信息确定方法的流程300体现了利用聚类算法对初步边缘检测结果—边缘检测图像进行聚类处理。由此,这些实施例描述的方案可以通过聚类处理对边缘检测图像中的连通域进行更为准确的划分。从而,便于根据准确划分的连通域确定拾取位姿。
With reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a pick-up pose information determination apparatus. These apparatus embodiments correspond to the method embodiments shown in Fig. 2, and the apparatus can be applied to various electronic devices.
As shown in Fig. 5, the pick-up pose information determination apparatus 500 of some embodiments includes a generation unit 501, a segmentation unit 502, a classification unit 503 and a determination unit 504. The generation unit 501 is configured to generate an edge detection image by using the color image and the depth image captured for the source region; the segmentation unit 502 is configured to perform connected domain segmentation on the edge detection image to obtain a connected domain segmentation image, the connected domain segmentation image including at least one connected domain; the classification unit 503 is configured to classify the connected domains in the connected domain segmentation image to obtain a set of pick-up connected domains; the determination unit 504 is configured to determine the pick-up pose information of the item represented by each pick-up connected domain in the set of pick-up connected domains, obtaining a set of pick-up pose information.
In an optional implementation of some embodiments, the pick-up pose information determination apparatus 500 further includes a sending unit configured to send the set of pick-up pose information to an automatic depalletizing and palletizing device.
In an optional implementation of some embodiments, the generation unit 501 includes an extraction subunit, a mask subunit and an edge detection subunit. The extraction subunit is configured to extract the region of interest from the depth image; the mask subunit is configured to mask the color image with the region of interest to obtain a masked image; the edge detection subunit is configured to perform edge detection on the masked image to obtain the edge detection image.
In an optional implementation of some embodiments, the classification unit 503 includes a first determination subunit, a classification subunit and a clustering subunit. The first determination subunit is configured to determine the reference area value according to the area values of the connected domains in the connected domain segmentation image; the classification subunit is configured to classify, by using the reference area value, the connected domains in the connected domain segmentation image into the set of target connected domains, the set of connected domains to be split and the set of connected domains to be spliced; the clustering subunit is configured to cluster the set of target connected domains, the set of connected domains to be split and the set of connected domains to be spliced to obtain the set of pick-up connected domains.
In an optional implementation of some embodiments, the clustering subunit includes a first clustering module, a first adding module and a second adding module. The first clustering module is configured to cluster each connected domain to be split in the set of connected domains to be split based on the reference area value, obtaining a set of first clustered connected domains; the first adding module is configured to add the first clustered connected domains satisfying the first preset condition to the set of target connected domains as target connected domains; the second adding module is configured to add the first clustered connected domains satisfying the second preset condition to the set of connected domains to be spliced as connected domains to be spliced.
In an optional implementation of some embodiments, the clustering subunit further includes a second clustering module, a third adding module and a second determination module. The second clustering module is configured to cluster the connected domains to be spliced in the set of connected domains to be spliced, obtaining a set of second clustered connected domains; the third adding module is configured to add the second clustered connected domains satisfying the first preset condition to the set of target connected domains as target connected domains; the second determination module is configured to determine the set of target connected domains as the set of pick-up connected domains.
In an optional implementation of some embodiments, the extraction subunit includes a first generation module, a first selection module, a first conversion module, a plane fitting module, a second selection module, a second conversion module and a second generation module. The first generation module is configured to generate the source-region plane equation by using the position information of the source region and the depth image; the first selection module is configured to select, by using the depth information of the pixels in the depth image and the source-region plane equation, the pixels of the depth image located within the source region as target pixels, obtaining a set of target pixels; the first conversion module is configured to convert each target pixel in the set of target pixels into point cloud data, obtaining a point cloud data set; the plane fitting module is configured to perform plane fitting on the point cloud data set to obtain the fitted plane equation; the second selection module is configured to select, according to the fitted plane equation, the point cloud data satisfying the third preset condition from the point cloud data set as target point cloud data, obtaining a target point cloud data set; the second conversion module is configured to convert the target point cloud data in the target point cloud data set into a two-dimensional coordinate system by using the intrinsic parameters of the camera that captured the depth image, obtaining a set of two-dimensional coordinates; the second generation module is configured to generate the region of interest according to the set of two-dimensional coordinates.
In an optional implementation of some embodiments, the first clustering module includes a determination submodule and a splitting submodule. The determination submodule is configured to determine the split cluster number according to the ratio of the area of the connected domain to be split to the reference area value; the splitting submodule is configured to split the connected domain to be split based on the split cluster number, obtaining the individual first clustered connected domains.
In an optional implementation of some embodiments, the first determination subunit is configured to determine the mode of the area values of the connected domains in the connected domain segmentation image as the reference area value.
In an optional implementation of some embodiments, the second generation module is configured to determine the minimum enclosing polygon of the point set corresponding to the set of two-dimensional coordinates as the region of interest.
It can be understood that the units recorded in the apparatus 500 correspond to the steps of the method described with reference to Fig. 2. Therefore, the operations, features and beneficial effects described above for the method also apply to the apparatus 500 and the units contained in it, and will not be repeated here.
Reference is now made to Fig. 6, which shows a schematic structural diagram of an electronic device 600 suitable for implementing some embodiments of the present disclosure. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing apparatus 601 (e.g., a central processing unit, a graphics processing unit, etc.) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to one another via a bus 604, and an input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output apparatuses 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 with various apparatuses, it should be understood that not all of the illustrated apparatuses must be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one apparatus or, as needed, multiple apparatuses.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of some embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described in some embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having at least one wire, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In some embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. In some embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave and carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted over any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency) and the like, or any suitable combination of the above.
In some implementations, the client and the server may communicate by using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: generate an edge detection image by using the color image and the depth image captured for the source region; perform connected domain segmentation on the edge detection image to obtain a connected domain segmentation image, the connected domain segmentation image including at least one connected domain; classify the connected domains in the connected domain segmentation image to obtain a set of pick-up connected domains; and determine the pick-up pose information of the item represented by each pick-up connected domain in the set of pick-up connected domains, obtaining a set of pick-up pose information.
The computer program code for performing the operations of some embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains at least one executable instruction for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a generation unit, a segmentation unit, a classification unit and a determination unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the generation unit may also be described as an "edge detection image generation unit".
The functions described herein above may be performed, at least in part, by at least one hardware logic component. For example, and without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs) and the like.
Claims (13)
- A pick-up pose information determination method, comprising: generating an edge detection image by using a color image and a depth image captured for a source region; performing connected domain segmentation on the edge detection image to obtain a connected domain segmentation image, wherein the connected domain segmentation image includes at least one connected domain; classifying the connected domains in the connected domain segmentation image to obtain a set of pick-up connected domains; and determining pick-up pose information of the item represented by each pick-up connected domain in the set of pick-up connected domains, obtaining a set of pick-up pose information.
- The method according to claim 1, wherein the method further comprises: sending the set of pick-up pose information to an automatic depalletizing and palletizing device.
- The method according to claim 1 or 2, wherein said generating an edge detection image by using a color image and a depth image captured for a source region comprises: extracting a region of interest from the depth image; masking the color image with the region of interest to obtain a masked image; and performing edge detection on the masked image to obtain the edge detection image.
- The method according to any one of claims 1-3, wherein said classifying the connected domains in the connected domain segmentation image to obtain a set of pick-up connected domains comprises: determining a reference area value according to the area values of the connected domains in the connected domain segmentation image; classifying, by using the reference area value, the connected domains in the connected domain segmentation image to obtain a set of target connected domains, a set of connected domains to be split and a set of connected domains to be spliced; and clustering the set of target connected domains, the set of connected domains to be split and the set of connected domains to be spliced to obtain the set of pick-up connected domains.
- The method according to claim 4, wherein said clustering the set of target connected domains, the set of connected domains to be split and the set of connected domains to be spliced to obtain the set of pick-up connected domains comprises: clustering each connected domain to be split in the set of connected domains to be split based on the reference area value, obtaining a set of first clustered connected domains; adding the first clustered connected domains in the set of first clustered connected domains that satisfy a first preset condition to the set of target connected domains as target connected domains; and adding the first clustered connected domains in the set of first clustered connected domains that satisfy a second preset condition to the set of connected domains to be spliced as connected domains to be spliced.
- The method according to claim 5, wherein said clustering the set of target connected domains, the set of connected domains to be split and the set of connected domains to be spliced to obtain the set of pick-up connected domains further comprises: clustering the connected domains to be spliced in the set of connected domains to be spliced, obtaining a set of second clustered connected domains; adding the second clustered connected domains in the set of second clustered connected domains that satisfy the first preset condition to the set of target connected domains as target connected domains; and determining the set of target connected domains as the set of pick-up connected domains.
- The method according to claim 3, wherein said extracting a region of interest from the depth image comprises: generating a source-region plane equation by using position information of the source region and the depth image; selecting, by using depth information of pixels in the depth image and the source-region plane equation, the pixels located within the source region in the depth image as target pixels, obtaining a set of target pixels; converting each target pixel in the set of target pixels into point cloud data, obtaining a point cloud data set; performing plane fitting on the point cloud data set to obtain a fitted plane equation; selecting, according to the fitted plane equation, the point cloud data satisfying a third preset condition from the point cloud data set as target point cloud data, obtaining a target point cloud data set; converting the target point cloud data in the target point cloud data set into a two-dimensional coordinate system by using intrinsic parameters of the camera that captured the depth image, obtaining a set of two-dimensional coordinates; and generating the region of interest according to the set of two-dimensional coordinates.
- The method according to claim 5 or 6, wherein said clustering each connected domain to be split in the set of connected domains to be split to obtain a set of first clustered connected domains comprises: determining a split cluster number according to the ratio of the area of the connected domain to be split to the reference area value; and splitting the connected domain to be split based on the split cluster number, obtaining the individual first clustered connected domains.
- The method according to any one of claims 4-6, wherein said determining a reference area value according to the area values of the connected domains in the connected domain segmentation image comprises: determining the mode of the area values of the connected domains in the connected domain segmentation image as the reference area value.
- The method according to claim 7, wherein said generating the region of interest according to the set of two-dimensional coordinates comprises: determining the minimum enclosing polygon of the point set corresponding to the set of two-dimensional coordinates as the region of interest.
- A pick-up pose information determination apparatus, comprising: a generation unit configured to generate an edge detection image by using a color image and a depth image captured for a source region; a segmentation unit configured to perform connected domain segmentation on the edge detection image to obtain a connected domain segmentation image, wherein the connected domain segmentation image includes at least one connected domain; a classification unit configured to classify the connected domains in the connected domain segmentation image to obtain a set of pick-up connected domains; and a determination unit configured to determine pick-up pose information of the item represented by each pick-up connected domain in the set of pick-up connected domains, obtaining a set of pick-up pose information.
- An electronic device, comprising: at least one processor; and a storage apparatus having at least one program stored thereon, wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method according to any one of claims 1-10.
- A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210220879.5 | 2022-03-08 | | |
CN202210220879.5A CN114638846A (zh) | 2022-03-08 | 2022-03-08 | Pick-up pose information determination method, apparatus, device and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023168955A1 (zh) | 2023-09-14 |
Family ID: 81947266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/128549 WO2023168955A1 (zh) | Pick-up pose information determination method, apparatus, device and computer-readable medium | 2022-03-08 | 2022-10-31 |

Country | Link |
---|---|
CN (1) | CN114638846A (zh) |
WO (1) | WO2023168955A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118229772A (zh) * | 2024-05-24 | 2024-06-21 | Hangzhou Shiteng Technology Co., Ltd. | Image-processing-based pallet pose detection method, system, device and medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114638846A (zh) * | 2022-03-08 | 2022-06-17 | Beijing Jingdong Qianshi Technology Co., Ltd. | Pick-up pose information determination method, apparatus, device and computer-readable medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170308736A1 * | 2014-10-28 | 2017-10-26 | Hewlett-Packard Development Company, L.P. | Three dimensional object recognition |
CN107945192A * | 2017-12-14 | 2018-04-20 | Beijing Information Science & Technology University | Real-time detection method for the stack pattern of cartons on a pallet |
CN110175999A * | 2019-05-30 | 2019-08-27 | Guangdong University of Technology | Pose detection method, system and apparatus |
CN111507390A * | 2020-04-11 | 2020-08-07 | Huazhong University of Science and Technology | Contour-feature-based warehouse box recognition and positioning method |
CN113284178A * | 2021-06-11 | 2021-08-20 | Mech-Mind (Beijing) Robotics Technologies Co., Ltd. | Object palletizing method, apparatus, computing device and computer storage medium |
CN113345023A * | 2021-07-05 | 2021-09-03 | Beijing Jingdong Qianshi Technology Co., Ltd. | Box positioning method, apparatus, medium and electronic device |
CN113688704A * | 2021-08-13 | 2021-11-23 | Beijing Jingdong Qianshi Technology Co., Ltd. | Article picking method, apparatus, electronic device and computer-readable medium |
CN114638846A * | 2022-03-08 | 2022-06-17 | Beijing Jingdong Qianshi Technology Co., Ltd. | Pick-up pose information determination method, apparatus, device and computer-readable medium |
- 2022-03-08: CN application CN202210220879.5A filed; published as CN114638846A (status: pending)
- 2022-10-31: PCT application PCT/CN2022/128549 filed; published as WO2023168955A1
Also Published As
Publication number | Publication date |
---|---|
CN114638846A (zh) | 2022-06-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | | Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22930578; Country of ref document: EP; Kind code of ref document: A1 |