CN109102547A - Robot grasping pose estimation method based on an object recognition deep learning model - Google Patents
- Publication number
- CN109102547A (application CN201810803444.7A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- algorithm
- target
- deep learning
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a robot grasping pose estimation method based on an object recognition deep learning model, relating to the technical field of computer vision. The method is based on an RGBD camera and deep learning and comprises the following steps. S1: perform camera parameter calibration and hand-eye calibration. S2: train an object detection model for the target items. S3: build a three-dimensional point cloud template library of the target objects. S4: identify the type and position of each item in the grasping area. S5: fuse the two-dimensional and three-dimensional visual information to obtain the point cloud of the specified target object. S6: complete the pose estimation of the target object. S7: reject erroneous results with a mistake-avoidance algorithm based on sample accumulation. S8: while the robot end-effector moves toward the target object, the vision system continually repeats steps S4 to S7, realizing iterative optimization of the target object pose estimate. The algorithm uses the YOLO object detection model for fast preliminary detection, which reduces the computation of three-dimensional point cloud segmentation and matching and improves efficiency and accuracy.
Description
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a robot grasping pose estimation method based on an object recognition deep learning model.
Background art
In recent years, with the continuous development of computer vision, machine vision technology has been widely applied across manufacturing and the service industry. Its combination with other traditional disciplines has also grown increasingly close, producing a significant impact in fields such as construction, agriculture, medical treatment, and traffic. Applications in the machinery field are especially extensive, and vision-based robot grasping has become a current research hotspot.
Vision systems can be divided into monocular vision, binocular vision, and depth vision according to the sensor. Monocular vision is mainly used in the two-dimensional image field; in three-dimensional vision its performance is comparatively stable over large outdoor scales, but it is not suitable for robot grasping scenarios. Binocular vision is widely applied in three-dimensional stereoscopic vision, but its algorithms are complex, and obtaining high-accuracy three-dimensional information sacrifices a certain amount of time performance. In depth vision, an RGBD camera based on the structured-light principle attains high precision in indoor grasping environments, so using an RGBD camera is the research hotspot of the robot grasping field.
Therefore, those skilled in the art are dedicated to developing a robot grasping pose estimation method based on an object recognition deep learning model: a method that detects the target object to be grasped by the robot with an RGBD camera and then estimates the pose of that target object.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to overcome the prior-art problems of complex algorithms and the long time needed to obtain high-accuracy three-dimensional information, improving both the precision of information acquisition and the efficiency of grasping.
To achieve the above object, the present invention provides a robot grasping pose estimation method based on an object recognition deep learning model, comprising the following steps:
Step S1: mount the RGBD camera on the robot end-effector, and perform camera parameter calibration and hand-eye calibration based on Zhang Zhengyou's camera calibration algorithm and the Tsai-Lenz hand-eye calibration algorithm;
Step S2: collect a large number of two-dimensional images containing the various target items to be grasped as a YOLO training set, and train a YOLO model able to detect the target objects;
Step S3: build the three-dimensional point cloud template library of the target objects, using a point cloud segmentation algorithm based on color region growing and the "five-frame method" to splice point clouds;
Step S4: using the YOLO model trained in step S2, identify the type and position of each object in the grasping area;
Step S5: fuse the two-dimensional visual information with the three-dimensional visual information to obtain the target object point cloud;
Step S6: estimate the pose of the target object by registering its point cloud against the object point cloud template in the template library;
Step S7: use a mistake-avoidance algorithm based on sample accumulation to reject erroneous estimation results;
Step S8: while the robot end-effector moves toward the target object, have the vision system continually repeat steps S4 to S7, realizing iterative optimization of the target object pose estimate.
Further, the method realizes data interaction based on the ROS operating system, including acquisition of image and three-dimensional information, acquisition of the robot end-effector pose, and various matrix operations.
Further, the RGBD depth sensor must provide environment RGB information and depth information acquisition, and the two categories of information must be registered with high accuracy.
Further, the YOLO-based deep learning algorithm can identify known objects through advance training, and can directly extract the bounding box and type of a target object.
Further, obtaining the target object point cloud requires fusing the two-dimensional image recognition results with the raw three-dimensional information; the target point cloud is separated from the environment by a graph-cut segmentation algorithm, and each point cloud block is regrown according to a lower threshold on point count, finally extracting the three-dimensional information of the object.
Further, the pose estimation algorithm comprises: first coarsely matching the three-dimensional FPFH feature point cloud with the FPFH-based coarse matching algorithm to obtain an initial value of the target pose, then adjusting the initial value with the ICP fine matching algorithm to obtain the accurate pose of the target.
Further, the mistake-avoidance algorithm based on sample accumulation accumulates samples under different estimation conditions: the transformation matrix of each iteration is clustered, and matches falling in small clusters are regarded as mistakes and rejected.
Further, the pose estimation algorithm pre-establishes a standard point cloud, estimates the pose of the target object against that standard point cloud, and completes the calculation of the target object pose information based on the data set.
Further, step S5 also includes the point cloud regrowth procedure after segmentation, specifically:
Step S5-1: take the point cloud blocks produced by the graph-cut segmentation algorithm, store each independently in the structure to be grown, and empty the target point cloud;
Step S5-2: take the point cloud block with the most points; if this block is neither a plane nor empty, merge it into the target point cloud;
Step S5-3: check the number of points in the target point cloud; if it is below the minimum point count for the object, return to step S5-2;
Step S5-4: output the target point cloud as the final segmentation result.
Further, the flatness threshold of the point cloud segmentation is 0.2, the lower threshold for each object's point cloud is 80% of the front-face point cloud count of the corresponding template in the library, and the matrix similarity threshold in the mistake-avoidance algorithm is 0.1.
The present invention performs object detection at the two-dimensional level with the deep learning algorithm YOLO, which can quickly identify the type of a target object in a complicated scene and roughly locate it. The detection is then fused with the three-dimensional point cloud, and point cloud segmentation and registration estimate the position and posture of the target object accurately within one second; the estimate is continually iterated and optimized as the robot moves. The mistake-avoidance algorithm based on sample accumulation significantly improves the success rate of the repeated pose estimations during iteration, while the fusion step narrows the point cloud segmentation range and thereby improves the computational efficiency of the whole system.
The concept, specific structure, and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its purpose, features, and effects can be fully understood.
Description of the drawings
Fig. 1 is a flow chart of the vision system operation of a preferred embodiment of the invention;
Fig. 2 is a schematic diagram of the iterative optimization of a preferred embodiment of the invention.
Specific embodiment
Preferred embodiments of the invention are introduced below with reference to the accompanying drawings, to make its technical content clearer and easier to understand. The present invention can be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned in the text.
In the accompanying drawings, components with identical structure carry the same numeric label, and components with similar structure or function carry like numeric labels. The size and thickness of each component shown in the drawings are drawn arbitrarily; the present invention does not limit the size and thickness of any component. For clarity of illustration, the thickness of components is suitably exaggerated in places.
As shown in Fig. 1, the robot used in the embodiment of the present invention is a UR5, and the RGBD camera an Orbbec Astra Mini. The camera is installed on the robot end-effector, constituting an eye-in-hand hand-eye system. There are 7 kinds of target objects to be identified and located, all familiar desktop items, placed at random on the experimental bench in the area awaiting grasping. Through the target object identification and pose estimation method of the invention, the accurate pose of a specified object is obtained, providing goal guidance for the subsequent robot grasp. The specific implementation steps are as follows:
S1: Mount the RGBD camera on the robot end-effector, and perform camera parameter calibration and hand-eye calibration based on Zhang Zhengyou's camera calibration algorithm and the Tsai-Lenz hand-eye calibration algorithm.
The camera_calibration package shipped with ROS generates the camera parameter configuration file, and the calibration result is cross-checked against the MATLAB camera calibration toolbox and the MSU GML calibration tool. Two checkerboard calibration boards, 12 × 13 and 9 × 12, were used; calibration was repeated several times and the most stable parameter group with the best test performance was selected.
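For orientation, the following is a minimal sketch of this calibration step in Python with OpenCV, not the ROS camera_calibration pipeline actually used in the embodiment; the interior-corner pattern size and square size are placeholder assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def calibrate_camera(images, pattern_size=(9, 12), square_size=0.02):
    """Zhang Zhengyou's method: estimate intrinsics from checkerboard views."""
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size                      # board geometry in metres
    obj_points, img_points, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return K, dist

def calibrate_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Tsai-Lenz hand-eye calibration: camera pose in the gripper frame from
    per-view gripper-in-base and target-in-camera poses."""
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```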
S2: Collect a large number of two-dimensional images containing the various target items to be grasped as a YOLO training set, and train a YOLO model able to detect the target objects.
Video was shot with the RGBD camera on the robot end-effector while the robot followed a fixed motion trajectory. The trajectory design covered every attitude angle and distance of each object, to guarantee uniformity of the data set. Frames were extracted from the video at equal intervals, yielding 1600 raw data set pictures in total; after data augmentation, a YOLOv2 model was trained on them. In testing, the model reaches a recognition accuracy of 94% on the target items.
S3: Build the three-dimensional point cloud template library of the target objects, using the point cloud segmentation algorithm based on color region growing and the "five-frame method" to splice point clouds.
The point clouds in the template library must be of high quality: little noise, obvious features, and a good chance of a unique, well-behaved match. YOLO object detection is still used first and then fused with the three-dimensional point cloud before segmentation; note, however, that the segmentation algorithm used here is no longer the graph-cut algorithm but the one based on color region growing. Although this algorithm takes longer, the point cloud quality it yields is high, and an environment color background particularly favorable to segmentation can be arranged for the target object. After the point cloud of the target object is obtained, the "five-frame method" is used to register the clouds and splice them into a more complete three-dimensional object point cloud template library.
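Since the patent names color region growing but discloses no implementation, the following is a hedged sketch of the idea: grow regions over a k-d tree of the cloud, merging neighbors whose RGB colors are close. The search radius and color threshold are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow_by_color(points, colors, radius=0.01, color_thresh=0.1):
    """points: (N, 3) XYZ; colors: (N, 3) RGB in [0, 1].
    Returns a list of index arrays, one per grown region."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    regions = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        unvisited[seed] = False
        queue, region = [seed], []
        while queue:
            idx = queue.pop()
            region.append(idx)
            # Merge spatial neighbours whose color is close to the current point.
            for nb in tree.query_ball_point(points[idx], radius):
                if unvisited[nb] and np.linalg.norm(colors[nb] - colors[idx]) < color_thresh:
                    unvisited[nb] = False
                    queue.append(nb)
        regions.append(np.array(region))
    return regions
```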
S4: Using the YOLO model trained in S2, identify the type and position of each object in the grasping area.
The trained YOLO model can quickly identify the type and position of items at 30 frames per second on the two-dimensional visual level, outputting the type and predicted bounding box of each object. The prediction box of the specified target object is located among them, and its center together with the box width and height are output as the input to the subsequent target object pose estimation.
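A hedged sketch of this detection step, using OpenCV's DNN module to run a trained Darknet/YOLOv2 network; the file names, input size, and confidence threshold are assumptions, not values disclosed in the patent.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")  # assumed files

def detect_box(image, class_id, conf_thresh=0.5):
    """Return (cx, cy, w, h) in pixels for the best box of one class, or None."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    rows = net.forward()        # region layer: [cx, cy, bw, bh, obj, class scores...]
    best = None
    for row in rows:
        scores = row[5:]
        if np.argmax(scores) == class_id and row[4] * scores[class_id] > conf_thresh:
            if best is None or row[4] > best[0]:
                best = (row[4], row[:4] * np.array([w, h, w, h]))
    return None if best is None else best[1]
```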
S5: Fuse the two-dimensional visual information with the three-dimensional visual information to obtain the target object point cloud.
According to the two-dimensional pixel coordinates of the object box provided by YOLO, the corresponding in-box portion of the three-dimensional point cloud is extracted. An SVM-based graph-cut segmentation algorithm, combined with the point cloud regrowth algorithm after segmentation, then extracts the point cloud of the target object from this in-box partial cloud.
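The fusion amounts to back-projecting the depth pixels inside the YOLO box through the intrinsics calibrated in S1. A minimal sketch, assuming a depth image registered to the RGB frame and a millimetre depth scale:

```python
import numpy as np

def points_in_box(depth, K, box, depth_scale=0.001):
    """depth: (H, W) uint16, registered to RGB; box: (cx, cy, bw, bh) in pixels."""
    cx, cy, bw, bh = box
    u0, u1 = max(int(cx - bw / 2), 0), min(int(cx + bw / 2), depth.shape[1])
    v0, v1 = max(int(cy - bh / 2), 0), min(int(cy + bh / 2), depth.shape[0])
    fx, fy, px, py = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    z = depth[v0:v1, u0:u1].astype(np.float64) * depth_scale
    valid = z > 0                      # drop pixels with missing depth
    x = (us - px) / fx * z             # pinhole back-projection
    y = (vs - py) / fy * z
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```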
The point cloud regrowth procedure after segmentation is as follows (a sketch follows the list):
(1) Take the point cloud blocks produced by the graph-cut segmentation algorithm, store each independently in the structure to be grown, and empty the target point cloud.
(2) Take the point cloud block with the most points; if this block is neither a plane nor empty (NaN), merge it into the target point cloud.
(3) Check the number of points in the target point cloud: if it is below the threshold (the minimum point count for the object), return to (2); otherwise continue.
(4) Output the target point cloud as the final segmentation result.
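The four steps translate almost directly into code. In the sketch below, `segments` stands for the blocks returned by the graph-cut segmentation and `is_plane` for a RANSAC-style plane test; both are assumed helpers rather than names from the patent.

```python
import numpy as np

def regrow_target(segments, is_plane, min_points):
    """segments: list of (Ni, 3) arrays from the cut; returns the target cloud."""
    pool = sorted(segments, key=len, reverse=True)   # (1) store all blocks ...
    target = []                                      # ... and empty the target cloud
    for seg in pool:                                 # (2) largest block first
        if len(seg) == 0 or is_plane(seg):           # skip planes and empty blocks
            continue
        target.append(seg)
        if sum(len(s) for s in target) >= min_points:  # (3) enough points yet?
            break
    return np.vstack(target) if target else np.empty((0, 3))  # (4) final result
```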
S6: Estimate the pose of the target object by registering its point cloud against the object point cloud template in the template library.
The point cloud matching algorithm based on FPFH features places no requirement on the initial value, but because of its randomness its registration precision suffers and there is a certain mismatch probability; even so, it can provide a good rough initial value for fine matching. The registration precision of the ICP-based point cloud matching algorithm is much higher than that of feature-based registration, with stronger robustness and adaptability, but ICP's requirement on the initial value is very strict and it easily falls into a local optimum. In the present invention, therefore, the FPFH feature matching serves as the coarse match, its result is handed to ICP as the initial value for fine matching, and the two resulting transformation matrices are finally multiplied to obtain the overall pose estimation matrix.
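A sketch of this coarse-to-fine registration using Open3D (the patent names FPFH and ICP but no library; the voxel size and distance thresholds are assumptions). Open3D's registration_icp composes the initial transformation into its result, which corresponds to the matrix multiplication described above.

```python
import open3d as o3d

def estimate_pose(source, target, voxel=0.005):
    """Register the observed cloud (source) to the template (target); 4x4 pose."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)

    # Coarse FPFH match: no initial value needed, gives a rough transformation.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 3,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 3)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # ICP fine match, initialised by the coarse result.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```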
S7: In the final pose estimation stage, use the mistake-avoidance algorithm based on sample accumulation to reject erroneous results.
The basic idea of the algorithm is that whether a given movement is executed no longer depends only on the current transformation matrix: the previous transformation matrices are also taken into account, and what is finally executed is the newest transformation matrix within the most populous cluster. Mismatched transformation matrices fall into small clusters and can thus be rejected, avoiding mistakes caused by the various factors in the three links S4 to S6 and significantly improving the success rate of the vision system.
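A minimal sketch of the idea: keep every pose matrix estimated so far, cluster matrices that are close under the Frobenius norm, and execute a pose only if it belongs to the largest cluster. The 0.1 similarity threshold mirrors the value given later in the text; the greedy single-link clustering is an assumption.

```python
import numpy as np

def pose_to_execute(history, new_pose, sim_thresh=0.1):
    """history: list of prior 4x4 pose matrices; returns new_pose or None."""
    history.append(new_pose)
    clusters = []                              # greedy single-link clustering
    for pose in history:
        for cluster in clusters:
            if np.linalg.norm(pose - cluster[0]) < sim_thresh:
                cluster.append(pose)
                break
        else:
            clusters.append([pose])
    largest = max(clusters, key=len)
    # A pose landing in a minority cluster is treated as a mismatch and skipped.
    return new_pose if any(p is new_pose for p in largest) else None
```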
S8: While the robot end-effector moves toward the target object, the vision system continually repeats the S4-to-S7 loop, realizing iterative optimization of the target object pose estimate.
As shown in Fig. 2, the target position estimated by the camera during iteration, from e1 to e2 to e3 and so on, gets closer and closer to the true value. This is because as the camera approaches, the acquired point cloud gains points and the estimated posture becomes more correct and complete; the iterative process therefore acts as an optimization and reduces the final grasping error.
For implementation, the YOLO model must be trained in advance and the point cloud templates of the target objects pre-established.
Preferably, in order to guarantee the completeness of the acquired point cloud, the side of the target object facing the camera should contain no concave surface.
Preferably, the initial distance between the target object and the camera should be 0.6 to 1 m.
Preferably, the flatness threshold during point cloud segmentation is 0.2, the lower threshold for each object's point cloud is 80% of the front-face point cloud count of the corresponding template in the library, and the matrix similarity threshold in the mistake-avoidance algorithm is 0.1.
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that persons of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative labor. Therefore, every technical scheme that technicians in the art can obtain through logical analysis, reasoning, or limited experiment on the basis of the prior art under the concept of this invention shall fall within the scope of protection determined by the claims.
Claims (10)
1. A robot grasping pose estimation method based on an object recognition deep learning model, characterized in that it comprises the following steps:
Step S1: mounting the RGBD camera on the robot end-effector, and performing camera parameter calibration and hand-eye calibration based on Zhang Zhengyou's camera calibration algorithm and the Tsai-Lenz hand-eye calibration algorithm;
Step S2: collecting a large number of two-dimensional images containing the various target items to be grasped as a YOLO training set, and training a YOLO model able to detect the target objects;
Step S3: building the three-dimensional point cloud template library of the target objects, using a point cloud segmentation algorithm based on color region growing and the "five-frame method" to splice point clouds;
Step S4: using the YOLO model trained in step S2, identifying the type and position of each object in the grasping area;
Step S5: fusing the two-dimensional visual information with the three-dimensional visual information to obtain the target object point cloud;
Step S6: estimating the pose of the target object by registering its point cloud against the object point cloud template in the template library;
Step S7: using a mistake-avoidance algorithm based on sample accumulation to reject erroneous estimation results;
Step S8: while the robot end-effector moves toward the target object, having the vision system continually repeat steps S4 to S7, realizing iterative optimization of the target object pose estimate.
2. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the method realizes data interaction based on the ROS operating system, including acquisition of image and three-dimensional information, acquisition of the robot end-effector pose, and various matrix operations.
3. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the RGBD depth sensor must provide environment RGB information and depth information acquisition, and the two categories of information must be registered with high accuracy.
4. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the YOLO-based deep learning algorithm can identify known objects through advance training, and can directly extract the bounding box and type of a target object.
5. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that obtaining the target object point cloud requires fusing the two-dimensional image recognition results with the raw three-dimensional information; the target point cloud is separated from the environment by a graph-cut segmentation algorithm, and each point cloud block is regrown according to a lower threshold on point count, finally extracting the three-dimensional information of the object.
6. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the pose estimation algorithm comprises: first coarsely matching the three-dimensional FPFH feature point cloud with the FPFH-based coarse matching algorithm to obtain an initial value of the target pose, then adjusting the initial value with the ICP fine matching algorithm to obtain the accurate pose of the target.
7. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the mistake-avoidance algorithm based on sample accumulation accumulates samples under different estimation conditions: the transformation matrix of each iteration is clustered, and matches falling in small clusters are regarded as mistakes and rejected.
8. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the pose estimation algorithm pre-establishes a standard point cloud, estimates the pose of the target object against the standard point cloud, and completes the calculation of the target object pose information based on the data set.
9. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that step S5 further includes the point cloud regrowth procedure after segmentation, specifically:
Step S5-1: taking the point cloud blocks produced by the graph-cut segmentation algorithm, storing each independently in the structure to be grown, and emptying the target point cloud;
Step S5-2: taking the point cloud block with the most points, and merging it into the target point cloud if it is neither a plane nor empty;
Step S5-3: checking the number of points in the target point cloud, and returning to step S5-2 if it is below the minimum point count for the object;
Step S5-4: outputting the target point cloud as the final segmentation result.
10. The robot grasping pose estimation method based on an object recognition deep learning model according to claim 1, characterized in that the flatness threshold of the point cloud segmentation is 0.2, the lower threshold for each object's point cloud is 80% of the front-face point cloud count of the corresponding template in the library, and the matrix similarity threshold in the mistake-avoidance algorithm is 0.1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810803444.7A CN109102547A (en) | 2018-07-20 | 2018-07-20 | Robot grasping pose estimation method based on an object recognition deep learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109102547A true CN109102547A (en) | 2018-12-28 |
Family
ID=64847072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810803444.7A Pending CN109102547A (en) | Robot grasping pose estimation method based on an object recognition deep learning model | 2018-07-20 | 2018-07-20 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102547A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120121132A1 (en) * | 2009-05-12 | 2012-05-17 | Albert-Ludwigs University Freiburg | Object recognition method, object recognition apparatus, and autonomous mobile robot |
CN103170973A (en) * | 2013-03-28 | 2013-06-26 | 上海理工大学 | Man-machine cooperation device and method based on Kinect video camera |
CN106530297A (en) * | 2016-11-11 | 2017-03-22 | 北京睿思奥图智能科技有限公司 | Object grabbing region positioning method based on point cloud registering |
CN108053449A (en) * | 2017-12-25 | 2018-05-18 | 北京工业大学 | Three-dimensional rebuilding method, device and the binocular vision system of binocular vision system |
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | 哈工大机器人(合肥)国际创新研究院 | A kind of visual identity of object manipulator intelligent grabbing application and localization method |
Cited By (95)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112074868A (en) * | 2018-12-29 | 2020-12-11 | 河南埃尔森智能科技有限公司 | Industrial robot positioning method and device based on structured light, controller and medium |
CN109903331A (en) * | 2019-01-08 | 2019-06-18 | 杭州电子科技大学 | A kind of convolutional neural networks object detection method based on RGB-D camera |
CN109903331B (en) * | 2019-01-08 | 2020-12-22 | 杭州电子科技大学 | Convolutional neural network target detection method based on RGB-D camera |
CN110060257B (en) * | 2019-02-22 | 2022-11-25 | 叠境数字科技(上海)有限公司 | RGBD hair segmentation method based on different hairstyles |
CN110060257A (en) * | 2019-02-22 | 2019-07-26 | 叠境数字科技(上海)有限公司 | A kind of RGBD hair dividing method based on different hair styles |
CN109940616B (en) * | 2019-03-21 | 2022-06-03 | 佛山智能装备技术研究院 | Intelligent grabbing system and method based on brain-cerebellum mode |
CN109940616A (en) * | 2019-03-21 | 2019-06-28 | 佛山智能装备技术研究院 | One kind being based on brain-cerebella model intelligent grabbing system and method |
CN109978949B (en) * | 2019-03-26 | 2023-04-28 | 南开大学 | Crop identification and feature point three-dimensional coordinate extraction method based on computer vision |
CN109978949A (en) * | 2019-03-26 | 2019-07-05 | 南开大学 | A kind of method that crops identification based on computer vision is extracted with characteristic point three-dimensional coordinate |
CN111768449A (en) * | 2019-03-30 | 2020-10-13 | 北京伟景智能科技有限公司 | Object grabbing method combining binocular vision with deep learning |
CN111768449B (en) * | 2019-03-30 | 2024-05-14 | 北京伟景智能科技有限公司 | Object grabbing method combining binocular vision with deep learning |
CN110000783A (en) * | 2019-04-04 | 2019-07-12 | 上海节卡机器人科技有限公司 | Robotic vision grasping means and device |
CN113785303A (en) * | 2019-05-06 | 2021-12-10 | 库卡德国有限公司 | Machine learning object recognition by means of a robot-guided camera |
CN110276806A (en) * | 2019-05-27 | 2019-09-24 | 江苏大学 | Online hand-eye calibration and crawl pose calculation method for four-freedom-degree parallel-connection robot stereoscopic vision hand-eye system |
CN110232710A (en) * | 2019-05-31 | 2019-09-13 | 深圳市皕像科技有限公司 | Article localization method, system and equipment based on three-dimensional camera |
CN110276793A (en) * | 2019-06-05 | 2019-09-24 | 北京三快在线科技有限公司 | A kind of method and device for demarcating three-dimension object |
CN110246127A (en) * | 2019-06-17 | 2019-09-17 | 南京工程学院 | Workpiece identification and localization method and system, sorting system based on depth camera |
CN110298885A (en) * | 2019-06-18 | 2019-10-01 | 仲恺农业工程学院 | Stereoscopic vision identification method and positioning clamping detection device for non-smooth spheroid object and application of stereoscopic vision identification method and positioning clamping detection device |
CN110378325B (en) * | 2019-06-20 | 2022-03-15 | 西北工业大学 | Target pose identification method in robot grabbing process |
CN110378325A (en) * | 2019-06-20 | 2019-10-25 | 西北工业大学 | A kind of object pose recognition methods during robot crawl |
CN110287873A (en) * | 2019-06-25 | 2019-09-27 | 清华大学深圳研究生院 | Noncooperative target pose measuring method, system and terminal device based on deep neural network |
CN110322512A (en) * | 2019-06-28 | 2019-10-11 | 中国科学院自动化研究所 | In conjunction with the segmentation of small sample example and three-dimensional matched object pose estimation method |
CN110322515A (en) * | 2019-07-02 | 2019-10-11 | 工极智能科技(苏州)有限公司 | Workpiece identification and grabbing point extraction method based on binocular stereo vision |
CN110544279A (en) * | 2019-08-26 | 2019-12-06 | 华南理工大学 | pose estimation method combining image identification and genetic algorithm fine registration |
CN110555889A (en) * | 2019-08-27 | 2019-12-10 | 西安交通大学 | CALTag and point cloud information-based depth camera hand-eye calibration method |
CN110653820B (en) * | 2019-09-29 | 2022-11-01 | 东北大学 | Robot grabbing pose estimation method combined with geometric constraint |
CN110653820A (en) * | 2019-09-29 | 2020-01-07 | 东北大学 | Robot grabbing pose estimation method combined with geometric constraint |
CN110930452A (en) * | 2019-10-23 | 2020-03-27 | 同济大学 | Object pose estimation method based on self-supervision learning and template matching |
CN110930452B (en) * | 2019-10-23 | 2023-05-02 | 同济大学 | Object pose estimation method based on self-supervision learning and template matching |
US11288828B2 (en) | 2019-11-21 | 2022-03-29 | Industrial Technology Research Institute | Object recognition system based on machine learning and method thereof |
CN111080693A (en) * | 2019-11-22 | 2020-04-28 | 天津大学 | Robot autonomous classification grabbing method based on YOLOv3 |
CN110910452B (en) * | 2019-11-26 | 2023-08-25 | 上海交通大学 | Low-texture industrial part pose estimation method based on deep learning |
WO2021103824A1 (en) * | 2019-11-26 | 2021-06-03 | 广东技术师范大学 | Key point position determining method and device in robot hand-eye calibration based on calibration block |
CN110910452A (en) * | 2019-11-26 | 2020-03-24 | 上海交通大学 | Low-texture industrial part pose estimation method based on deep learning |
CN111127556B (en) * | 2019-11-29 | 2023-06-13 | 合刃科技(上海)有限公司 | Target object identification and pose estimation method and device based on 3D vision |
CN111127556A (en) * | 2019-11-29 | 2020-05-08 | 合刃科技(上海)有限公司 | Target object identification and pose estimation method and device based on 3D vision |
CN111216124A (en) * | 2019-12-02 | 2020-06-02 | 广东技术师范大学 | Robot vision guiding method and device based on integration of global vision and local vision |
CN110969660B (en) * | 2019-12-17 | 2023-09-22 | 浙江大学 | Robot feeding system based on three-dimensional vision and point cloud deep learning |
CN110969660A (en) * | 2019-12-17 | 2020-04-07 | 浙江大学 | Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning |
CN111145257B (en) * | 2019-12-27 | 2024-01-05 | 深圳市越疆科技有限公司 | Article grabbing method and system and article grabbing robot |
CN111178250B (en) * | 2019-12-27 | 2024-01-12 | 深圳市越疆科技有限公司 | Object identification positioning method and device and terminal equipment |
CN111178250A (en) * | 2019-12-27 | 2020-05-19 | 深圳市越疆科技有限公司 | Object identification positioning method and device and terminal equipment |
CN111145257A (en) * | 2019-12-27 | 2020-05-12 | 深圳市越疆科技有限公司 | Article grabbing method and system and article grabbing robot |
WO2021135321A1 (en) * | 2019-12-30 | 2021-07-08 | 苏宁云计算有限公司 | Object positioning method and apparatus, and computer system |
CN111127638A (en) * | 2019-12-30 | 2020-05-08 | 芜湖哈特机器人产业技术研究院有限公司 | Method for realizing positioning and grabbing point of protruding mark position of workpiece by using three-dimensional template library |
CN111259934A (en) * | 2020-01-09 | 2020-06-09 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111259934B (en) * | 2020-01-09 | 2023-04-07 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111222480B (en) * | 2020-01-13 | 2023-05-26 | 佛山科学技术学院 | Online grape weight estimation method and detection device based on deep learning |
CN111222480A (en) * | 2020-01-13 | 2020-06-02 | 佛山科学技术学院 | Grape weight online estimation method and detection device based on deep learning |
CN111347426B (en) * | 2020-03-26 | 2021-06-04 | 季华实验室 | Mechanical arm accurate placement track planning method based on 3D vision |
CN111347426A (en) * | 2020-03-26 | 2020-06-30 | 季华实验室 | Mechanical arm accurate placement track planning method based on 3D vision |
CN111524115B (en) * | 2020-04-17 | 2023-10-13 | 湖南视比特机器人有限公司 | Positioning method and sorting system for steel plate cutting piece |
CN111524115A (en) * | 2020-04-17 | 2020-08-11 | 湖南视比特机器人有限公司 | Positioning method and sorting system for steel plate cutting piece |
CN111652928B (en) * | 2020-05-11 | 2023-12-15 | 上海交通大学 | Object grabbing pose detection method in three-dimensional point cloud |
CN111652928A (en) * | 2020-05-11 | 2020-09-11 | 上海交通大学 | Method for detecting object grabbing pose in three-dimensional point cloud |
CN113232015A (en) * | 2020-05-27 | 2021-08-10 | 杭州中为光电技术有限公司 | Robot space positioning and grabbing control method based on template matching |
CN111815706B (en) * | 2020-06-23 | 2023-10-27 | 熵智科技(深圳)有限公司 | Visual identification method, device, equipment and medium for single-item unstacking |
CN111815706A (en) * | 2020-06-23 | 2020-10-23 | 熵智科技(深圳)有限公司 | Visual identification method, device, equipment and medium for single-article unstacking |
CN111784770A (en) * | 2020-06-28 | 2020-10-16 | 河北工业大学 | Three-dimensional attitude estimation method in disordered grabbing based on SHOT and ICP algorithm |
CN111784770B (en) * | 2020-06-28 | 2022-04-01 | 河北工业大学 | Three-dimensional attitude estimation method in disordered grabbing based on SHOT and ICP algorithm |
CN111881887A (en) * | 2020-08-21 | 2020-11-03 | 董秀园 | Multi-camera-based motion attitude monitoring and guiding method and device |
CN112215861A (en) * | 2020-09-27 | 2021-01-12 | 深圳市优必选科技股份有限公司 | Football detection method and device, computer readable storage medium and robot |
WO2022062238A1 (en) * | 2020-09-27 | 2022-03-31 | 深圳市优必选科技股份有限公司 | Football detection method and apparatus, and computer-readable storage medium and robot |
CN112476434A (en) * | 2020-11-24 | 2021-03-12 | 新拓三维技术(深圳)有限公司 | Visual 3D pick-and-place method and system based on cooperative robot |
CN112487960A (en) * | 2020-11-27 | 2021-03-12 | 同济大学 | Machine vision-based toilet bowl embryo in-vitro flexible bonding method and system |
CN112487960B (en) * | 2020-11-27 | 2023-02-10 | 同济大学 | Machine vision-based toilet bowl embryo in-vitro flexible bonding method and system |
CN112614179A (en) * | 2020-12-16 | 2021-04-06 | 南昌航空大学 | Automatic plate-type honeycomb ceramic assembling method based on vision |
CN112465903A (en) * | 2020-12-21 | 2021-03-09 | 上海交通大学宁波人工智能研究院 | 6DOF object attitude estimation method based on deep learning point cloud matching |
CN112790786A (en) * | 2020-12-30 | 2021-05-14 | 无锡祥生医疗科技股份有限公司 | Point cloud data registration method and device, ultrasonic equipment and storage medium |
CN112902966A (en) * | 2021-01-28 | 2021-06-04 | 开放智能机器(上海)有限公司 | Fusion positioning system and method |
CN113021355A (en) * | 2021-03-31 | 2021-06-25 | 重庆正格技术创新服务有限公司 | Agricultural robot operation method for predicting sheltered crop picking point |
CN113344769B (en) * | 2021-04-20 | 2024-06-14 | 梅卡曼德(北京)机器人科技有限公司 | Method, device and medium for acquiring 3D image information of article based on machine vision |
CN113344769A (en) * | 2021-04-20 | 2021-09-03 | 梅卡曼德(北京)机器人科技有限公司 | Method, device and medium for acquiring 3D image information of article based on machine vision |
CN113379849B (en) * | 2021-06-10 | 2023-04-18 | 南开大学 | Robot autonomous recognition intelligent grabbing method and system based on depth camera |
CN113379849A (en) * | 2021-06-10 | 2021-09-10 | 南开大学 | Robot autonomous recognition intelligent grabbing method and system based on depth camera |
CN113246145A (en) * | 2021-07-02 | 2021-08-13 | 杭州景业智能科技股份有限公司 | Pose compensation method and system for nuclear industry grabbing equipment and electronic device |
CN113610921A (en) * | 2021-08-06 | 2021-11-05 | 沈阳风驰软件股份有限公司 | Hybrid workpiece grabbing method, device and computer-readable storage medium |
CN113610921B (en) * | 2021-08-06 | 2023-12-15 | 沈阳风驰软件股份有限公司 | Hybrid workpiece gripping method, apparatus, and computer readable storage medium |
CN113781561B (en) * | 2021-09-09 | 2023-10-27 | 诺力智能装备股份有限公司 | Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram |
CN113781561A (en) * | 2021-09-09 | 2021-12-10 | 诺力智能装备股份有限公司 | Target pose estimation method based on self-adaptive Gaussian weight fast point feature histogram |
CN114155256A (en) * | 2021-10-21 | 2022-03-08 | 北京航空航天大学 | Method and system for tracking deformation of flexible object by using RGBD camera |
CN114155256B (en) * | 2021-10-21 | 2024-05-24 | 北京航空航天大学 | Method and system for tracking deformation of flexible object by using RGBD camera |
CN114663752B (en) * | 2022-02-28 | 2024-04-12 | 江苏大学 | Intelligent estimation method and system for yield of edible beans based on machine vision |
CN114663752A (en) * | 2022-02-28 | 2022-06-24 | 江苏大学 | Edible bean yield intelligent estimation method and system based on machine vision |
CN114912287A (en) * | 2022-05-26 | 2022-08-16 | 四川大学 | Robot autonomous grabbing simulation system and method based on target 6D pose estimation |
CN114952809A (en) * | 2022-06-24 | 2022-08-30 | 中国科学院宁波材料技术与工程研究所 | Workpiece identification and pose detection method and system and grabbing control method of mechanical arm |
WO2024041392A1 (en) * | 2022-08-23 | 2024-02-29 | 北京有竹居网络技术有限公司 | Image processing method and apparatus, storage medium, and device |
CN115730236B (en) * | 2022-11-25 | 2023-09-22 | 杭州电子科技大学 | Medicine identification acquisition method, equipment and storage medium based on man-machine interaction |
CN115730236A (en) * | 2022-11-25 | 2023-03-03 | 杭州电子科技大学 | Drug identification acquisition method, device and storage medium based on man-machine interaction |
CN115620001A (en) * | 2022-12-15 | 2023-01-17 | 长春理工大学 | Visual auxiliary system based on 3D point cloud bilateral amplification algorithm |
CN115922738A (en) * | 2023-03-09 | 2023-04-07 | 季华实验室 | Electronic component grabbing method, device, equipment and medium in stacking scene |
CN116330306B (en) * | 2023-05-31 | 2023-08-15 | 之江实验室 | Object grabbing method and device, storage medium and electronic equipment |
CN116330306A (en) * | 2023-05-31 | 2023-06-27 | 之江实验室 | Object grabbing method and device, storage medium and electronic equipment |
CN117104831A (en) * | 2023-09-01 | 2023-11-24 | 中信戴卡股份有限公司 | Robot 3D recognition and processing method and system for knuckle workpiece |
CN118314531A (en) * | 2024-06-07 | 2024-07-09 | 浙江聿力科技有限公司 | Government service behavior pose monitoring management method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109102547A (en) | Robot grasping pose estimation method based on an object recognition deep learning model | |
CN108717531B (en) | Human body posture estimation method based on Faster R-CNN | |
CN104850850B (en) | A kind of binocular stereo vision image characteristic extracting method of combination shape and color | |
CN110580723B (en) | Method for carrying out accurate positioning by utilizing deep learning and computer vision | |
CN109934847B (en) | Method and device for estimating posture of weak texture three-dimensional object | |
CN108229416B (en) | Robot SLAM method based on semantic segmentation technology | |
US11315264B2 (en) | Laser sensor-based map generation | |
CN112818925B (en) | Urban building and crown identification method | |
CN112184757B (en) | Method and device for determining motion trail, storage medium and electronic device | |
CN105023010A (en) | Face living body detection method and system | |
CN107944386B (en) | Visual scene recognition methods based on convolutional neural networks | |
CN107705322A (en) | Motion estimate tracking and system | |
CN104794737B (en) | A kind of depth information Auxiliary Particle Filter tracking | |
JP2013050947A (en) | Method for object pose estimation, apparatus for object pose estimation, method for object estimation pose refinement and computer readable medium | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN113850865A (en) | Human body posture positioning method and system based on binocular vision and storage medium | |
CN109117755A (en) | A kind of human face in-vivo detection method, system and equipment | |
CN109886356A (en) | A kind of target tracking method based on three branch's neural networks | |
CN106846367B (en) | A kind of Mobile object detection method of the complicated dynamic scene based on kinematic constraint optical flow method | |
Zelener et al. | Cnn-based object segmentation in urban lidar with missing points | |
CN102289822A (en) | Method for tracking moving target collaboratively by multiple cameras | |
CN108256567A (en) | A kind of target identification method and system based on deep learning | |
CN104166995B (en) | Harris-SIFT binocular vision positioning method based on horse pace measurement | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
Gao et al. | Improved binocular localization of kiwifruit in orchard based on fruit and calyx detection using YOLOv5x for robotic picking |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | |
Address after: 200240 building 6, 646 Jianchuan Road, Minhang District, Shanghai. Applicant after: SHANGHAI JAKA ROBOTICS Ltd.
Address before: 200120 floor 1, building 1, No. 251, Yaohua Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai. Applicant before: SHANGHAI JAKA ROBOTICS Ltd.

RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2018-12-28