US11878433B2 - Method for detecting grasping position of robot in grasping object - Google Patents
- Publication number
- US11878433B2 (U.S. application Ser. No. 17/032,399)
- Authority
- US
- United States
- Prior art keywords
- target object
- grasping position
- network
- target
- pixel region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1669—Programme controls characterised by programming, planning systems for manipulators characterised by special application, e.g. multi-arm co-operation, assembly, grasping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39484—Locate, reach and grasp, visual guided grasping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Definitions
- Embodiments of the present application relate to the technical field of autonomous grasping for robots, and in particular, relate to a method and apparatus for detecting a grasping position of a robot in grasping a target object, and a computing device and a computer-readable storage medium thereof.
- in view of the above, the present application provides a method and apparatus for detecting a grasping position of a robot in grasping a target object, and a computing device and a computer-readable storage medium thereof, which overcome the above problems or at least partially solve them.
- a solution adopted in an embodiment of the present application provides a method for detecting a grasping position of a robot in grasping a target object, comprising:
- the target object segmentation network is a network trained based on a convolutional neural network model, and the target object segmentation network is trained by:
- the optimal grasping position generation network is a network trained based on a convolutional neural network model, and the optimal grasping position generation network is trained by:
- the grasping position quality evaluation network is a network trained based on a convolutional neural network model, and the grasping position quality evaluation network is trained by:
- a solution adopted in another embodiment of the present application provides an apparatus for detecting a grasping position of a robot in grasping a target object, comprising:
- the target object segmentation network in the segmenting module is a network trained based on a convolutional neural network model, and the target object segmentation network is trained by:
- the optimal grasping position generation network in the grasping module is a network trained based on a convolutional neural network model, and the optimal grasping position generation network is trained by:
- the grasping position quality evaluation network in the evaluating module is a network trained based on a convolutional neural network model, and the grasping position quality evaluation network is trained by:
- a solution adopted in yet another embodiment of the present application provides a computing device, comprising: a processor, a memory, a communication interface and a communication bus; wherein the processor, the memory and the communication interface communicate with each other via the communication bus; and
- a solution adopted in still another embodiment of the present application provides a computer-readable storage medium, the storage medium storing at least one executable instruction; wherein the at least one executable instruction, when being executed, causes the processor to perform the operations corresponding to the method for detecting the grasping position of the robot in grasping the target object.
- the beneficial effects of the embodiments of the present application over prior art lie in that, in the embodiments of the present application, the pixel region corresponding to the target object is obtained by the target object segmentation network, the pixel region corresponding to the target object is input to the optimal grasping position generation network to obtain the optimal grasping position for grasping the target object, the score of the optimal grasping position is calculated by the grasping position quality evaluation network, and the optimal grasping position corresponding to the highest score is taken as the global optimal grasping position of the robot.
- the robot could autonomously grasp the target object at the optimal grasping position.
- FIG. 1 is a flowchart of a method for detecting a grasping position of a robot in grasping a target object according to an embodiment of the present application
- FIG. 2 is a flowchart of training a target object segmentation network according to an embodiment of the present application
- FIG. 3 is a flowchart of training an optimal grasping position generation network according to an embodiment of the present application
- FIG. 4 is a flowchart of training a grasping position quality evaluation network according to an embodiment of the present application
- FIG. 5 is a functional diagram of an apparatus for detecting a grasping position of a robot in grasping a target object according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of a computing device according to an embodiment of the present application.
- An embodiment of the present application provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores at least one computer-executable instruction, which may be executed to perform the method for detecting the grasping position of the robot in grasping the target object in any of the above method embodiments.
- FIG. 1 is a schematic flowchart of a method for detecting a grasping position of a robot in grasping a target object according to an embodiment of the present application. As illustrated in FIG. 1 , the method includes the following steps:
- Step S 101 At least one target RGB image and at least one target Depth image of the target object at different view angles are collected, wherein pixel points in the at least one target RGB image have one-to-one corresponding pixel points in the at least one target Depth image.
- the target object is placed on a desk under the robot arm of the robot, and an RGB image and a Depth image at the current position are acquired, wherein pixels in the RGB image have one-to-one corresponding pixels in the Depth image.
- the robot arm is moved to re-collect images from other angles.
- images at eight positions, front, rear, left, right, upper front, upper rear, upper left and upper right are collected.
- Step S 102 Each of the at least one target RGB image is input to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object in the target Depth image.
- each of the at least one target RGB image is input to the target object segmentation network for calculation to obtain the RGB pixel region of the target object in the target RGB image.
- the RGB images one-to-one correspond to the Depth images. Therefore, the Depth pixel region of the target object in the target Depth image may be positioned according to the RGB pixel region of the target object in the target RGB image.
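Because the RGB and Depth pixels correspond one-to-one, positioning the Depth pixel region reduces to reusing the segmentation mask on the Depth image. A minimal sketch of this step, where the image shapes and the boolean `rgb_mask` (standing in for the segmentation network's output) are assumptions for illustration:

```python
import numpy as np

# Assumed setup: a 320x320 RGB/Depth pair and a boolean segmentation
# mask `rgb_mask` standing in for the target object segmentation output.
H, W = 320, 320
depth = np.arange(H * W, dtype=np.float32).reshape(H, W)  # stand-in Depth image
rgb_mask = np.zeros((H, W), dtype=bool)
rgb_mask[100:200, 120:220] = True  # assumed RGB pixel region of the target

# RGB and Depth pixels correspond one-to-one, so the same mask directly
# selects the Depth pixel region of the target object.
depth_region = np.where(rgb_mask, depth, 0.0)
```

The key point is that no second segmentation pass over the Depth image is needed; the RGB mask indexes both modalities.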
- FIG. 2 is a flowchart of training a target object segmentation network according to an embodiment of the present application. As illustrated in FIG. 2 , the target object segmentation network is trained by the following steps:
- Step S 1021 An RGB image containing the target object is acquired.
- Step S 1022 The RGB image is zoomed to a first predetermined resolution to obtain a first training set.
- each RGB image is zoomed to the first predetermined resolution to accommodate the network structure.
- the first predetermined resolution is 320×320 pixels.
- Step S 1023 A pixel region corresponding to the target object in the first training set is annotated.
- the pixel region corresponding to the target object is manually annotated.
- the position of the pixel region corresponding to the target object in an image in the training set is annotated by a block.
- Step S 1024 The first training set and the pixel region corresponding to the target object are input to the convolutional neural network model for training to obtain the target object segmentation network.
- the convolutional neural network model is any one of the mainstream convolutional neural network-based models for object segmentation, for example, a segmentation network (SegNet), a DeepLab network (DeepLab v1, DeepLab v2, DeepLab v3, DeepLab v3+), a pyramid scene parsing network (PSPNet), and an image cascade network (ICNet).
- a segmentation network SegNet
- the pixel region corresponding to the target object is taken as one category
- the pixel region corresponding to the background not containing the target object as one category
- the first training set and the pixel region corresponding to the target object are input to the convolutional neural network model for training.
- the convolutional neural network model has 27 layers; during the training, the features of the pixel region corresponding to the target object are extracted by the convolutional layers, and meanwhile the image is zoomed to the first predetermined pixel size.
- This process is referred to as an encoder.
- the classified features of the target object are reproduced by de-convolutional calculation, and the target size of the pixel region corresponding to the target object is restored by up-sampling.
- This process is referred to as a decoder.
- a probability of each pixel category is calculated by taking the outputs of the decoder as inputs of a softmax classifier, and the pixel region corresponding to the target object is determined according to the probability.
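The per-pixel softmax classification at the decoder output can be sketched as follows. This is a NumPy illustration of the computation only, not the patent's 27-layer network; `logits` is an assumed `(C, H, W)` decoder output with one channel per pixel category:

```python
import numpy as np

def pixelwise_softmax(logits):
    """logits: (C, H, W) decoder outputs, one channel per pixel category.
    Returns per-pixel category probabilities of the same shape."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # numerically stable
    return e / e.sum(axis=0, keepdims=True)

def segment(logits):
    """Label each pixel with its most probable category (H, W)."""
    return pixelwise_softmax(logits).argmax(axis=0)
```

With two categories (background and target object), the pixel region corresponding to the target object is simply the set of pixels whose most probable category is the target class.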
- Step S 1025 An overlap comparison is performed between the RGB pixel region corresponding to the target object obtained by the target object segmentation network and the annotated pixel region corresponding to the target object.
- the image containing the target object is taken as an input of the target object segmentation network to obtain the RGB pixel region obtained by the target object segmentation network, and the overlap comparison is performed between the pixel region obtained by the target object segmentation network and the annotated pixel region corresponding to the target object.
- a result of the overlap comparison is taken as an evaluation metric of the target object segmentation network.
- Step S 1026 A weight of the target object segmentation network is adjusted according to the overlap comparison result.
- the overlap comparison result is compared with a predetermined overlap comparison result threshold. If the overlap comparison result is lower than the predetermined overlap comparison result threshold, the structure and the weight of the neural network are adjusted.
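One common way to realize such an overlap comparison is intersection-over-union (IoU) between the predicted and annotated masks; the source does not name the exact metric, so the following is a sketch under that assumption:

```python
import numpy as np

def overlap_iou(pred_mask, gt_mask):
    """Intersection-over-union between the predicted pixel region and the
    annotated pixel region; both arguments are boolean (or 0/1) masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return np.logical_and(pred, gt).sum() / union
```

The resulting value in [0, 1] can then be compared against the predetermined overlap threshold to decide whether the network's structure and weights need adjustment.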
- Step S 103 The RGB pixel region of the target object is input to an optimal grasping position generation network to obtain an optimal grasping position for grasping the target object.
- the optimal grasping position generation network is a network trained based on the convolutional neural network model.
- FIG. 3 is a flowchart of training an optimal grasping position generation network according to an embodiment of the present application. As illustrated in FIG. 3 , the optimal grasping position generation network is trained by the following steps:
- Step S 1031 The RGB pixel region corresponding to the target object obtained by the target object segmentation network is zoomed to a second predetermined resolution to obtain a second training set.
- the RGB pixel region corresponding to the target object is zoomed to a second predetermined resolution to accommodate the network structure.
- the second predetermined resolution is 227×227 pixels.
- Step S 1032 Optimal grasping position coordinates are marked for an image in the second training set.
- (X, Y, θ) is marked as the grasping position of the target object, wherein (X, Y) denotes a grasping point and θ denotes a grasping angle.
- the grasping angle is defined, and then an optimal grasping position under each grasping angle is marked.
- a grasping angle range [0, 180°] is evenly partitioned into 18 angle values, and the optimal grasping position coordinates under each angle are marked.
- the grasping angle and the optimal grasping position coordinates are annotated for each image in the second training set.
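The angle discretization described above can be sketched as follows, assuming the range is sampled at a uniform 10° step (18 values over [0°, 180°)); `nearest_angle_bin` is a hypothetical helper for mapping a continuous angle to its bin:

```python
NUM_ANGLES = 18
ANGLE_STEP = 180.0 / NUM_ANGLES  # 10 degrees per bin

# The 18 candidate grasping angles: 0, 10, 20, ..., 170 degrees.
angles = [i * ANGLE_STEP for i in range(NUM_ANGLES)]

def nearest_angle_bin(theta_deg):
    """Map a continuous grasp angle (in degrees) to the nearest bin index,
    wrapping so that angles near 180 degrees map back to bin 0."""
    return int(round((theta_deg % 180.0) / ANGLE_STEP)) % NUM_ANGLES
```

Each image in the second training set is then annotated with the optimal grasping point under each of these discretized angles.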
- Step S 1033 The image in the second training set and the corresponding optimal grasping position coordinates are taken as inputs, and the inputs are trained based on the convolutional neural network model to obtain an optimal grasping position generation network.
- the convolutional neural network model is any of the conventional convolutional neural network models.
- taking an AlexNet model as an example, the image in the second training set and the corresponding optimal grasping position (X, Y, θ) are taken as inputs of the convolutional neural network model, wherein the AlexNet model has seven layers, including five convolutional layers and two fully-connected layers.
- after the optimal grasping position generation network is obtained, a Euclidean distance between a predicted grasping point (Xp, Yp) output by the network and the marker point (X, Y) is calculated, and the weight of the optimal grasping position generation network is adjusted by a Softmax loss function according to the Euclidean distance.
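The Euclidean distance between the predicted and marked grasping points is a direct computation:

```python
import math

def grasp_point_error(predicted, marked):
    """Euclidean distance between the predicted grasping point (Xp, Yp)
    and the annotated marker point (X, Y), in pixels."""
    (xp, yp), (x, y) = predicted, marked
    return math.hypot(xp - x, yp - y)
```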
- Step S 104 The Depth pixel region of the target object and the optimal grasping position are input to a grasping position quality evaluation network to calculate a score of the optimal grasping position.
- the grasping position quality evaluation network is a network trained based on the convolutional neural network model.
- FIG. 4 is a flowchart of training a grasping position quality evaluation network according to an embodiment of the present application. As illustrated in FIG. 4 , the grasping position quality evaluation network is trained by the following steps:
- Step S 1041 A Depth image containing the target object is acquired.
- the Depth image refers to a Depth image obtained according to the RGB image, wherein pixels in the Depth image have one-to-one corresponding pixels in the RGB image.
- Step S 1042 The Depth image is zoomed to a third predetermined resolution to obtain a third training set.
- each Depth image is zoomed to the third predetermined resolution to accommodate the network structure.
- the third predetermined resolution is 32×32 pixels.
- Step S 1043 A pair of grasping positions are randomly acquired on the Depth image in the third training set, and a corresponding score is calculated by a predetermined scoring algorithm.
- Step S 1044 The Depth image, the grasping positions and the score corresponding to the grasping positions are taken as inputs, and the inputs are trained based on the convolutional neural network model to obtain the grasping position quality evaluation network.
- the convolutional neural network includes nine layers, wherein four layers are convolutional layers, one layer is a pooling layer and four layers are fully-connected layers.
- the score output by the grasping position quality evaluation network is compared with the score obtained by the predetermined scoring algorithm in step S 1043 , and the weight of the grasping position quality evaluation network is then adjusted according to the comparison result.
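One simple way to turn this score comparison into a training signal is a mean-squared-error loss between the two sets of scores; the source does not specify the loss, so this is a sketch under that assumption:

```python
def score_loss(predicted_scores, reference_scores):
    """Mean squared error between the evaluation network's scores and the
    scores produced by the predetermined scoring algorithm (assumed here
    to serve as the training target)."""
    assert len(predicted_scores) == len(reference_scores)
    n = len(predicted_scores)
    return sum((p - r) ** 2 for p, r in zip(predicted_scores, reference_scores)) / n
```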
- Step S 105 An optimal grasping position corresponding to a highest score is selected as a global optimal grasping position of the robot.
- the pixel region corresponding to the target object is obtained by the target object segmentation network, the pixel region corresponding to the target object is input to the optimal grasping position generation network to obtain the optimal grasping position for grasping the target object, the score of the optimal grasping position is calculated by the grasping position quality evaluation network, and the optimal grasping position corresponding to the highest score is taken as the global optimal grasping position of the robot.
- the robot could autonomously grasp the target object at the optimal grasping position.
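Selecting the global optimal grasping position is an argmax over the scored candidates gathered from all views; a minimal sketch, where each candidate is assumed to be a (grasp_position, score) pair:

```python
def select_global_grasp(candidates):
    """candidates: (grasp_position, score) pairs, one optimal grasp per view.
    Returns the grasp position whose quality score is highest."""
    best_position, _best_score = max(candidates, key=lambda c: c[1])
    return best_position
```

For example, with one scored candidate per collected view angle, the robot executes the grasp whose evaluation score is maximal across all views.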
- FIG. 5 is a functional block diagram of an apparatus for detecting a grasping position of a robot in grasping a target object according to an embodiment of the present application.
- the apparatus includes: a collecting module 501 , a segmenting module 502 , a grasping module 503 , an evaluating module 504 and a selecting module 505 .
- the collecting module 501 is configured to collect at least one target RGB image and at least one target Depth image of the target object at different view angles, wherein pixel points in the target RGB image have one-to-one corresponding pixel points in the target Depth image.
- the segmenting module 502 is configured to input each of the at least one target RGB image to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object in the target Depth image.
- the grasping module 503 is configured to input the RGB pixel region of the target object to an optimal grasping position generation network to obtain an optimal grasping position for grasping the target object.
- the evaluating module 504 is configured to input the Depth pixel region of the target object and the optimal grasping position to a grasping position quality evaluation network to calculate a score of the optimal grasping position.
- the selecting module 505 is configured to select an optimal grasping position corresponding to a highest score as a global optimal grasping position of the robot.
- the target object segmentation network in the segmenting module 502 is a network trained based on a convolutional neural network model, and the network is trained by:
- the optimal grasping position generation network in the grasping module 503 is a network trained based on a convolutional neural network model, and the network is trained by:
- the grasping position quality evaluation network in the evaluating module 504 is a network trained based on a convolutional neural network model, and the network is trained by:
- the pixel region corresponding to the target object is obtained by the segmenting module
- the optimal grasping position of the target object is obtained by the grasping module
- the score of the optimal grasping position is calculated by the evaluating module
- the optimal grasping position corresponding to the highest score is taken as the global optimal grasping position of the robot.
- the robot could autonomously grasp the target object at the optimal grasping position.
- FIG. 6 is a schematic structural diagram of a computing device according to an embodiment of the present application.
- the specific embodiments of the present application set no limitation to the practice of the computing device.
- the computing device may include: a processor 602 , a communication interface 604 , a memory 606 and a communication bus 608 .
- the processor 602 , the communication interface 604 and the memory 606 communicate with each other via the communication bus 608 .
- the communication interface 604 is configured to communicate with another device.
- the processor 602 is configured to execute the program 610 , and may specifically perform steps in the embodiments of the method for detecting the grasping position of the robot in grasping the target object.
- the program 610 may include a program code, wherein the program code includes a computer-executable instruction.
- the processor 602 may be a central processing unit (CPU) or an application specific integrated circuit (ASIC), or configured as one or more integrated circuits for implementing the embodiments of the present application.
- the computing device includes one or more processors, which may be the same type of processors, for example, one or more CPUs, or may be different types of processors, for example, one or more CPUs and one or more ASICs.
- the memory 606 is configured to store the program 610 .
- the memory 606 may include a high-speed RAM memory, or may also include a non-volatile memory, for example, at least one magnetic disk memory.
- the program 610 may be specifically configured to cause the processor 602 to perform the following operations:
- the program 610 may be further configured to cause the processor 602 to perform the following operations:
- program 610 may be specifically further configured to cause the processor 602 to perform the following operations:
- program 610 may be specifically further configured to cause the processor 602 to perform the following operations:
- modules in the devices according to the embodiments may be adaptively modified and these modules may be configured in one or more devices different from the embodiments herein.
- Modules or units or components in the embodiments may be combined into a single module or unit or component, and additionally these modules, units or components may be practiced in a plurality of sub-modules, subunits or subcomponents.
- all the features disclosed in this specification including the appended claims, abstract and accompanying drawings
- all the processes or units in such disclosed methods or devices may be combined in any way.
- Embodiments of the individual components of the present application may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that, in practice, some or all of the functions of some or all of the components in the apparatus for detecting the grasping position of the robot in grasping the target object according to individual embodiments of the present application may be implemented using a microprocessor or a digital signal processor (DSP).
- the present application may also be implemented as a device program (e.g., a computer program or a computer program product) for performing a part or all of the method as described herein.
- Such a program implementing the present application may be stored on a computer readable medium, or may be stored in the form of one or more signals. Such a signal may be obtained by downloading it from an Internet website, or provided on a carrier signal, or provided in any other form.
- any reference sign placed between the parentheses shall not be construed as a limitation to a claim.
- the word “comprise” or “include” does not exclude the presence of an element or a step not listed in a claim.
- the word “a” or “an” used before an element does not exclude the presence of a plurality of such elements.
- the present application may be implemented by means of a hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of the devices may be embodied by one and the same hardware item. Use of the words “first”, “second”, “third” and the like does not mean any ordering. Such words may be construed as naming.
Abstract
Description
-
- collecting at least one target RGB image and at least one target Depth image of the target object at different view angles, wherein pixel points in the at least one target RGB image have one-to-one corresponding pixel points in the at least one target Depth image;
- inputting each of the at least one target RGB image to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object in the target Depth image;
- inputting the RGB pixel region of the target object to an optimal grasping position generation network to obtain an optimal grasping position for grasping the target object;
- inputting the Depth pixel region of the target object and the optimal grasping position to a grasping position quality evaluation network to calculate a score of the optimal grasping position; and
- selecting an optimal grasping position corresponding to a highest score as a global optimal grasping position for the robot in grasping the target object.
-
- acquiring an RGB image containing the target object;
- zooming the RGB image to a first predetermined resolution to obtain a first training set;
- annotating a pixel region corresponding to the target object in the first training set;
- inputting the first training set and the pixel region corresponding to the target object to the convolutional neural network model for training to obtain the target object segmentation network;
- performing an overlap comparison between the RGB pixel region corresponding to the target object obtained by the target object segmentation network and the annotated pixel region corresponding to the target object; and
- adjusting a weight of the target object segmentation network according to the overlap comparison result.
-
- zooming the RGB pixel region corresponding to the target object obtained by the target object segmentation network to a second predetermined resolution to obtain a second training set;
- marking optimal grasping position coordinates for an image in the second training set; and
- taking the image in the second training set and the corresponding optimal grasping position coordinates as inputs, and training the inputs based on the convolutional neural network model to obtain the optimal grasping position generation network.
-
- acquiring a Depth image containing the target object;
- zooming the Depth image to a third predetermined resolution to obtain a third training set;
- randomly acquiring a pair of grasping positions on the Depth image in the third training set, and calculating a corresponding score by a predetermined scoring algorithm; and
- taking the Depth image, the grasping positions and the score corresponding to the grasping positions as inputs, and training the inputs based on the convolutional neural network model to obtain the grasping position quality evaluation network.
-
- a collecting module, configured to collect at least one target RGB image and at least one target Depth image of the target object at different view angles, wherein pixel points in the at least one target RGB image have one-to-one corresponding pixel points in the at least one target Depth image;
- a segmenting module, configured to input each of the at least one target RGB image to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object in the target Depth image;
- a grasping module, configured to input the RGB pixel region of the target object to an optimal grasping position generation network to obtain an optimal grasping position for grasping the target object;
- an evaluating module, configured to input the Depth pixel region of the target object and the optimal grasping position to a grasping position quality evaluation network to calculate a score of the optimal grasping position; and
- a selecting module, configured to select an optimal grasping position corresponding to a highest score as a global optimal grasping position of the robot.
-
- acquiring an RGB image containing the target object;
- zooming the RGB image to a first predetermined resolution to obtain a first training set;
- annotating a pixel region corresponding to the target object in the first training set;
- inputting the first training set and the pixel region corresponding to the target object to the convolutional neural network model for training to obtain the target object segmentation network;
- performing an overlap comparison between the RGB pixel region corresponding to the target object obtained by the target object segmentation network and the annotated pixel region corresponding to the target object; and
- adjusting a weight of the target object segmentation network according to the overlap comparison result.
-
- zooming the RGB pixel region corresponding to the target object obtained by the target object segmentation network to a second predetermined resolution to obtain a second training set;
- marking optimal grasping position coordinates for an image in the second training set; and
- taking the image in the second training set and the corresponding optimal grasping position coordinates as inputs, and training the inputs based on the convolutional neural network model to obtain an optimal grasping position generation network.
-
- acquiring a Depth image containing the target object;
- zooming the Depth image to a third predetermined resolution to obtain a third training set;
- randomly acquiring a pair of grasping positions on the Depth image in the third training set, and calculating a corresponding score by a predetermined scoring algorithm; and
- taking the Depth image, the grasping positions and the score corresponding to the grasping positions as inputs, and training the inputs based on the convolutional neural network model to obtain the grasping position quality evaluation network.
-
- the memory is configured to store at least one executable instruction, wherein the at least one executable instruction causes the processor to perform the operations corresponding to the method for detecting the grasping position of the robot in grasping the target object.
-
- acquiring an RGB image containing the target object;
- zooming the RGB image to a first predetermined resolution to obtain a first training set;
- annotating a pixel region corresponding to the target object in the first training set;
- inputting the first training set and the pixel region corresponding to the target object to the convolutional neural network model for training to obtain the target object segmentation network;
- performing an overlap comparison between the RGB pixel region corresponding to the target object obtained by the target object segmentation network and the annotated pixel region corresponding to the target object; and
- adjusting a weight of the target object segmentation network according to the overlap comparison result.
-
- zooming the RGB pixel region corresponding to the target object obtained by the target object segmentation network to a second predetermined resolution to obtain a second training set;
- marking optimal grasping position coordinates for an image in the second training set; and
- taking the image in the second training set and the corresponding optimal grasping position coordinates as inputs, and training the inputs based on the convolutional neural network model to obtain an optimal grasping position generation network.
-
- acquiring a Depth image containing the target object;
- zooming the Depth image to a third predetermined resolution to obtain a third training set;
- acquiring a pair of grasping positions on the Depth image in the third training set, and calculating a corresponding score by a predetermined scoring algorithm; and
- taking the Depth image, the grasping positions and the score corresponding to the grasping positions as inputs, and training the inputs based on the convolutional neural network model to obtain the grasping position quality evaluation network.
-
- collecting at least one target RGB image and at least one target Depth image of the target object at different view angles, wherein pixel points in the at least one target RGB image have one-to-one corresponding pixel points in the at least one target Depth image;
- inputting each of the at least one target RGB image to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object in the target Depth image;
- inputting the RGB pixel region of the target object to an optimal grasping position generation network to obtain an optimal grasping position of the target object;
- inputting the Depth pixel region of the target object and the optimal grasping position to a grasping position quality evaluation network to calculate a score of the optimal grasping position; and
- selecting an optimal grasping position corresponding to a highest score as a global optimal grasping position of the robot.
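The five claim steps above amount to a per-view segment/generate/evaluate loop followed by a global argmax over scores. The sketch below illustrates that control flow only; the three networks are replaced by toy stand-in callables, and all names are assumptions rather than the patented implementation.

```python
import numpy as np

def detect_global_grasp(views, segment, generate, evaluate):
    """Multi-view grasp detection pipeline sketched from the claim steps.

    `views` is a list of (rgb, depth) pairs captured at different view
    angles; `segment`, `generate` and `evaluate` stand in for the trained
    segmentation, grasp-generation and quality-evaluation networks.
    Returns the grasp with the highest score across all views.
    """
    best_score, best_grasp = -np.inf, None
    for rgb, depth in views:
        rgb_region, depth_region = segment(rgb, depth)  # target pixel regions
        grasp = generate(rgb_region)                    # per-view optimal grasp
        score = evaluate(depth_region, grasp)           # quality of that grasp
        if score > best_score:
            best_score, best_grasp = score, grasp
    return best_grasp, best_score

# Toy stand-ins: segmentation passes the images through, generation proposes
# the brightest pixel, and evaluation scores by depth at the grasp point.
segment = lambda rgb, depth: (rgb, depth)
generate = lambda region: tuple(map(int, np.unravel_index(np.argmax(region), region.shape)))
evaluate = lambda depth, g: float(depth[g])

rgb1, depth1 = np.eye(4), np.full((4, 4), 0.2)
rgb2, depth2 = np.eye(4), np.full((4, 4), 0.8)
grasp, score = detect_global_grasp([(rgb1, depth1), (rgb2, depth2)],
                                   segment, generate, evaluate)
print(grasp, score)  # (0, 0) 0.8
```

Because every view's best grasp is re-scored on its own Depth pixel region, the final selection is a comparison in a common score space, which is what lets the highest-scoring grasp serve as the global optimal grasping position.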
-
- acquiring an RGB image containing the target object;
- zooming the RGB image to a first predetermined resolution to obtain a first training set;
- annotating a pixel region corresponding to the target object in the first training set;
- inputting the first training set and the pixel region corresponding to the target object to the convolutional neural network model for training to obtain the target object segmentation network;
- performing an overlap comparison between the RGB pixel region corresponding to the target object obtained by the target object segmentation network and the annotated pixel region corresponding to the target object; and
- adjusting a weight of the target object segmentation network according to the overlap comparison result.
-
- zooming the RGB pixel region corresponding to the target object obtained by the target object segmentation network to a second predetermined resolution to obtain a second training set;
- marking optimal grasping position coordinates for an image in the second training set; and
- taking the image in the second training set and the corresponding optimal grasping position coordinates as inputs, and training the inputs based on the convolutional neural network model to obtain an optimal grasping position generation network.
-
- acquiring a Depth image containing the target object;
- zooming the Depth image to a third predetermined resolution to obtain a third training set;
- acquiring a pair of grasping positions on the Depth image in the third training set, and calculating a corresponding score by a predetermined scoring algorithm; and
- taking the Depth image, the grasping positions and the score corresponding to the grasping positions as inputs, and training the inputs based on the convolutional neural network model to obtain the grasping position quality evaluation network.
Claims (18)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811518381.7 | 2018-12-12 | ||
| CN201811518381.7A CN109658413B (en) | 2018-12-12 | 2018-12-12 | Method for detecting grabbing position of robot target object |
| PCT/CN2019/115959 WO2020119338A1 (en) | 2018-12-12 | 2019-11-06 | Method for detecting grabbing position of robot for target object |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/115959 Continuation WO2020119338A1 (en) | 2018-12-12 | 2019-11-06 | Method for detecting grabbing position of robot for target object |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210023720A1 US20210023720A1 (en) | 2021-01-28 |
| US11878433B2 true US11878433B2 (en) | 2024-01-23 |
Family
ID=66113814
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/032,399 Active 2041-12-29 US11878433B2 (en) | 2018-12-12 | 2020-09-25 | Method for detecting grasping position of robot in grasping object |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US11878433B2 (en) |
| JP (1) | JP7085726B2 (en) |
| CN (1) | CN109658413B (en) |
| WO (1) | WO2020119338A1 (en) |
Families Citing this family (54)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109658413B (en) * | 2018-12-12 | 2022-08-09 | 达闼机器人股份有限公司 | Method for detecting grabbing position of robot target object |
| US11185978B2 (en) * | 2019-01-08 | 2021-11-30 | Honda Motor Co., Ltd. | Depth perception modeling for grasping objects |
| CN110136163B (en) * | 2019-04-29 | 2021-02-12 | 中国科学院自动化研究所 | Hand motion fuzzy automatic cutout and application in human body soft segmentation and background replacement |
| DE102019207411A1 (en) * | 2019-05-21 | 2020-11-26 | Robert Bosch Gmbh | Method and device for the safe operation of an estimator |
| CN112101075B (en) * | 2019-06-18 | 2022-03-25 | 腾讯科技(深圳)有限公司 | Information implantation area identification method and device, storage medium and electronic equipment |
| CN110348333A (en) * | 2019-06-21 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | Object detecting method, device, storage medium and electronic equipment |
| CN111359915B (en) * | 2020-03-24 | 2022-05-24 | 广东弓叶科技有限公司 | Material sorting method and system based on machine vision |
| EP4114620A1 (en) * | 2020-04-06 | 2023-01-11 | Siemens Aktiengesellschaft | Task-oriented 3d reconstruction for autonomous robotic operations |
| CN111783537A (en) * | 2020-05-29 | 2020-10-16 | 哈尔滨莫迪科技有限责任公司 | Two-stage rapid grabbing detection method based on target detection characteristics |
| CN111652118B (en) * | 2020-05-29 | 2023-06-20 | 大连海事大学 | Marine product autonomous grabbing and guiding method based on underwater target neighbor distribution |
| US11559885B2 (en) | 2020-07-14 | 2023-01-24 | Intrinsic Innovation Llc | Method and system for grasping an object |
| US11275942B2 (en) | 2020-07-14 | 2022-03-15 | Vicarious Fpc, Inc. | Method and system for generating training data |
| US11541534B2 (en) | 2020-07-14 | 2023-01-03 | Intrinsic Innovation Llc | Method and system for object grasping |
| US12017368B2 (en) * | 2020-09-09 | 2024-06-25 | Fanuc Corporation | Mix-size depalletizing |
| CN112297013B (en) * | 2020-11-11 | 2022-02-18 | 浙江大学 | Robot intelligent grabbing method based on digital twin and deep neural network |
| US20240033907A1 (en) * | 2020-12-02 | 2024-02-01 | Ocado Innovatoin Limited | Pixelwise predictions for grasp generation |
| CN112613478B (en) * | 2021-01-04 | 2022-08-09 | 大连理工大学 | Data active selection method for robot grabbing |
| CN113781493A (en) * | 2021-01-04 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Image processing method, apparatus, electronic device, medium and computer program product |
| CN112861667A (en) * | 2021-01-26 | 2021-05-28 | 北京邮电大学 | Robot grabbing detection method based on multi-class object segmentation |
| CN112802105A (en) * | 2021-02-05 | 2021-05-14 | 梅卡曼德(北京)机器人科技有限公司 | Object grabbing method and device |
| CN113160313A (en) * | 2021-03-03 | 2021-07-23 | 广东工业大学 | Transparent object grabbing control method and device, terminal and storage medium |
| CN114998413B (en) * | 2021-04-14 | 2025-09-09 | 华东师范大学 | Method and device for measuring height of package in disordered grabbing system |
| US12036678B2 (en) * | 2021-05-25 | 2024-07-16 | Fanuc Corporation | Transparent object bin picking |
| CN113327295A (en) * | 2021-06-18 | 2021-08-31 | 华南理工大学 | Robot rapid grabbing method based on cascade full convolution neural network |
| EP4341050A1 (en) * | 2021-06-25 | 2024-03-27 | Siemens Corporation | High-level sensor fusion and multi-criteria decision making for autonomous bin picking |
| CN113506314B (en) * | 2021-06-25 | 2024-04-09 | 北京精密机电控制设备研究所 | Automatic grasping method and device for symmetrical quadrilateral workpiece under complex background |
| US12172310B2 (en) * | 2021-06-29 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for picking objects using 3-D geometry and segmentation |
| CN113591841B (en) * | 2021-07-09 | 2024-07-19 | 上海德托智能工程有限公司 | Positioning method, positioning device, computer equipment and computer readable storage medium |
| CN113326666B (en) * | 2021-07-15 | 2022-05-03 | 浙江大学 | Robot intelligent grabbing method based on convolutional neural network differentiable structure searching |
| CN113744333B (en) * | 2021-08-20 | 2024-02-13 | 北京航空航天大学 | A method and device for obtaining an object grabbing position |
| CN113420746B (en) * | 2021-08-25 | 2021-12-07 | 中国科学院自动化研究所 | Robot visual sorting method and device, electronic equipment and storage medium |
| CN113762159B (en) * | 2021-09-08 | 2023-08-08 | 山东大学 | A target capture detection method and system based on a directed arrow model |
| NL2029461B1 (en) * | 2021-10-19 | 2023-05-16 | Fizyr B V | Automated bin-picking based on deep learning |
| CN114049318B (en) * | 2021-11-03 | 2025-03-04 | 重庆理工大学 | A grasping posture detection method based on multimodal fusion features |
| CN113920142B (en) * | 2021-11-11 | 2023-09-26 | 江苏昱博自动化设备有限公司 | A multi-object sorting method for sorting manipulator based on deep learning |
| CN114140418B (en) * | 2021-11-26 | 2025-07-11 | 上海交通大学宁波人工智能研究院 | Seven-DOF grasping posture detection method based on RGB image and depth image |
| CN116416444B (en) * | 2021-12-29 | 2024-04-16 | 广东美的白色家电技术创新中心有限公司 | Object grabbing point estimation, model training and data generation method, device and system |
| CN114426923B (en) * | 2022-03-31 | 2022-07-12 | 季华实验室 | Environmental virus sampling robot and method |
| CN114683251A (en) * | 2022-03-31 | 2022-07-01 | 上海节卡机器人科技有限公司 | Robot grasping method, device, electronic device and readable storage medium |
| CN114750154A (en) * | 2022-04-25 | 2022-07-15 | 贵州电网有限责任公司 | Dynamic target identification, positioning and grabbing method for distribution network live working robot |
| CN115108117B (en) * | 2022-05-26 | 2023-06-27 | 盈合(深圳)机器人与自动化科技有限公司 | Cutting method, cutting system, terminal and computer storage medium |
| CN114782827B (en) * | 2022-06-22 | 2022-10-14 | 中国科学院微电子研究所 | Object capture point acquisition method and device based on image |
| CN115147488B (en) * | 2022-07-06 | 2024-06-18 | 湖南大学 | A workpiece pose estimation method and grasping system based on dense prediction |
| US12304070B2 (en) * | 2022-09-23 | 2025-05-20 | Fanuc Corporation | Grasp teach by human demonstration |
| CN116079711A (en) * | 2022-11-30 | 2023-05-09 | 西北工业大学 | Detection method for improving grabbing success rate of robot |
| CN115984668A (en) * | 2023-01-04 | 2023-04-18 | 杭州海康威视数字技术股份有限公司 | Grasping point prediction model training method, object grasping point determination method and device |
| CN115965641B (en) * | 2023-01-16 | 2025-08-15 | 杭州电子科技大学 | Deeplabv3+ network-based pharyngeal image segmentation and positioning method |
| CN116468937B (en) * | 2023-03-30 | 2025-11-18 | 湖南人文科技学院 | Category determination and grasping pose localization method, storage medium and terminal device |
| CN116399871B (en) * | 2023-04-19 | 2023-11-14 | 广州市阳普机电工程有限公司 | Automobile part assembly detection system and method based on machine vision |
| CN116950429B (en) * | 2023-07-31 | 2024-07-23 | 中建八局发展建设有限公司 | Quick positioning and splicing method, medium and system for large spliced wall |
| CN116749241B (en) * | 2023-08-16 | 2023-11-07 | 苏州视谷视觉技术有限公司 | Machine vision high accuracy location grabbing device |
| CN117067219B (en) * | 2023-10-13 | 2023-12-15 | 广州朗晴电动车有限公司 | Sheet metal mechanical arm control method and system for trolley body molding |
| CN117773944B (en) * | 2024-01-18 | 2025-03-04 | 中移雄安信息通信科技有限公司 | Method, device, equipment and storage medium for predicting grabbing position of mechanical arm |
| CN121083378B (en) * | 2025-11-13 | 2026-02-03 | 四川福摩斯工业技术有限公司 | A method and system for feeding materials using an intelligent robot |
Citations (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5325468A (en) | 1990-10-31 | 1994-06-28 | Sanyo Electric Co., Ltd. | Operation planning system for robot |
| US20160279791A1 (en) | 2015-03-24 | 2016-09-29 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
| CN106041937A (en) | 2016-08-16 | 2016-10-26 | 河南埃尔森智能科技有限公司 | Control method of manipulator grabbing control system based on binocular stereoscopic vision |
| CN106737692A (en) | 2017-02-10 | 2017-05-31 | 杭州迦智科技有限公司 | A kind of mechanical paw Grasp Planning method and control device based on depth projection |
| CN106780605A (en) | 2016-12-20 | 2017-05-31 | 芜湖哈特机器人产业技术研究院有限公司 | A kind of detection method of the object crawl position based on deep learning robot |
| CN106874914A (en) | 2017-01-12 | 2017-06-20 | 华南理工大学 | A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks |
| CN107972026A (en) | 2016-10-25 | 2018-05-01 | 深圳光启合众科技有限公司 | Robot, mechanical arm and its control method and device |
| CN108058172A (en) | 2017-11-30 | 2018-05-22 | 深圳市唯特视科技有限公司 | A kind of manipulator grasping means based on autoregression model |
| CN108229678A (en) | 2017-10-24 | 2018-06-29 | 深圳市商汤科技有限公司 | Network training method, method of controlling operation thereof, device, storage medium and equipment |
| CN108247601A (en) | 2018-02-09 | 2018-07-06 | 中国科学院电子学研究所 | Semantic crawl robot based on deep learning |
| CN108280856A (en) | 2018-02-09 | 2018-07-13 | 哈尔滨工业大学 | The unknown object that network model is inputted based on mixed information captures position and orientation estimation method |
| CN108510062A (en) | 2018-03-29 | 2018-09-07 | 东南大学 | A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network |
| CN108648233A (en) | 2018-03-24 | 2018-10-12 | 北京工业大学 | A kind of target identification based on deep learning and crawl localization method |
| WO2018221614A1 (en) | 2017-05-31 | 2018-12-06 | 株式会社Preferred Networks | Learning device, learning method, learning model, estimation device, and grip system |
| US10166676B1 (en) * | 2016-06-08 | 2019-01-01 | X Development Llc | Kinesthetic teaching of grasp parameters for grasping of objects by a grasping end effector of a robot |
| US20190005848A1 (en) * | 2017-06-29 | 2019-01-03 | Verb Surgical Inc. | Virtual reality training, simulation, and collaboration in a robotic surgical system |
| CN109658413A (en) | 2018-12-12 | 2019-04-19 | 深圳前海达闼云端智能科技有限公司 | A kind of method of robot target grasping body position detection |
| US20200171665A1 (en) * | 2016-06-20 | 2020-06-04 | Mitsubishi Heavy Industries, Ltd. | Robot control system and robot control method |
| US20200290206A1 (en) * | 2019-03-14 | 2020-09-17 | Fanuc Corporation | Operation tool for grasping workpiece including connector and robot apparatus including operation tool |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10089575B1 (en) | 2015-05-27 | 2018-10-02 | X Development Llc | Determining grasping parameters for grasping of an object by a robot grasping end effector |
| US11173599B2 (en) | 2016-05-20 | 2021-11-16 | Google Llc | Machine learning methods and apparatus related to predicting motion(s) of object(s) in a robot's environment based on image(s) capturing the object(s) and based on parameter(s) for future robot movement in the environment |
-
2018
- 2018-12-12 CN CN201811518381.7A patent/CN109658413B/en active Active
-
2019
- 2019-11-06 WO PCT/CN2019/115959 patent/WO2020119338A1/en not_active Ceased
- 2019-11-06 JP JP2020543212A patent/JP7085726B2/en active Active
-
2020
- 2020-09-25 US US17/032,399 patent/US11878433B2/en active Active
Non-Patent Citations (3)
| Title |
|---|
| 1st Office Action dated May 25, 2022 by the CN Office; Appln.No. 201811518381.7. |
| 1st Office Action dated Sep. 21, 2021 by the JP Office; Appln.No. 2020-543212. |
| International Search Report dated Jan. 2, 2020; PCT/CN2019/115959. |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20250262770A1 (en) * | 2023-02-24 | 2025-08-21 | Kabushiki Kaisha Toshiba | Handling apparatus, handling method, and recording medium |
| US12466078B2 (en) * | 2023-02-24 | 2025-11-11 | Kabushiki Kaisha Toshiba | Handling apparatus, handling method, and recording medium |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2021517681A (en) | 2021-07-26 |
| US20210023720A1 (en) | 2021-01-28 |
| CN109658413B (en) | 2022-08-09 |
| WO2020119338A1 (en) | 2020-06-18 |
| CN109658413A (en) | 2019-04-19 |
| JP7085726B2 (en) | 2022-06-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11878433B2 (en) | Method for detecting grasping position of robot in grasping object | |
| CN109740665B (en) | Method and system for detecting ship target with occluded image based on expert knowledge constraint | |
| CN110059558B (en) | Orchard obstacle real-time detection method based on improved SSD network | |
| CN111144322A (en) | A sorting method, device, equipment and storage medium | |
| WO2020177432A1 (en) | Multi-tag object detection method and system based on target detection network, and apparatuses | |
| CN110176027A (en) | Video target tracking method, device, equipment and storage medium | |
| JP7770581B2 (en) | Facial pose estimation method, device, electronic device, and storage medium | |
| CN111931764A (en) | Target detection method, target detection framework and related equipment | |
| CN106650827A (en) | Human body posture estimation method and system based on structure guidance deep learning | |
| CN115797736B (en) | Object detection model training and object detection method, device, equipment and medium | |
| CN114419428A (en) | A target detection method, target detection device and computer readable storage medium | |
| CN111047626A (en) | Target tracking method, device, electronic device and storage medium | |
| CN108520197A (en) | A kind of Remote Sensing Target detection method and device | |
| CN111091101B (en) | High-precision pedestrian detection method, system and device based on one-step method | |
| CN113240716B (en) | Twin network target tracking method and system with multi-feature fusion | |
| CN119540942B (en) | A SLAM method and system for dynamic environment dense point cloud based on YOLOv11 and ORB-SLAM3 | |
| CN114743045A (en) | A Small-Sample Object Detection Method Based on Dual-branch Region Proposal Network | |
| CN107545263A (en) | A kind of object detecting method and device | |
| CN112308917B (en) | A vision-based mobile robot positioning method | |
| CN114972492A (en) | A bird's-eye view-based pose determination method, device and computer storage medium | |
| CN110310305A (en) | A target tracking method and device based on BSSD detection and Kalman filter | |
| Zhou et al. | Object detection in low-light conditions based on DBS-YOLOv8 | |
| CN117237620A (en) | A single target positioning method based on lightweight improved Unet semantic segmentation network | |
| CN107995442A (en) | Processing method, device and the computing device of video data | |
| Cao et al. | A novel YOLOv5-Based hybrid underwater target detection algorithm combining with CBAM and CIoU |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DU, GUOGUANG;WANG, KAI;LIAN, SHIGUO;REEL/FRAME:053891/0217 Effective date: 20200922 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
| AS | Assignment |
Owner name: CLOUDMINDS ROBOTICS CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS CO., LTD.;REEL/FRAME:055625/0290 Effective date: 20210302 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: DATAA NEW TECHNOLOGY CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CLOUDMINDS ROBOTICS CO., LTD.;REEL/FRAME:072052/0055 Effective date: 20250106 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |