CN111553891A - Handheld object existence detection method
- Publication number: CN111553891A (application CN202010326599.3A)
- Authority
- CN
- China
- Prior art keywords
- depth
- camera
- hand
- image
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All classes fall under G—Physics, G06—Computing, G06T—Image data processing or generation, in general:
- G06T7/0002—Image analysis; inspection of images, e.g. flaw detection
- G06T5/70—Image enhancement or restoration; denoising, smoothing
- G06T7/136—Segmentation; edge detection involving thresholding
- G06T7/187—Segmentation; edge detection involving region growing, region merging, or connected component labelling
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/90—Determination of colour characteristics
Abstract
The invention belongs to the technical field of visual recognition and relates to a method for detecting the presence of a handheld object. The sensor adopted by the method is a camera sensor integrating a color camera and a depth camera. The method acquires color, depth, and human skeleton information simultaneously through the sensor, maps the human hand joint coordinate point to the depth image, extracts a hand mask region by a region growing method, maps the hand mask region to the color image, and judges the proportion of hand skin by an HSV threshold segmentation method to determine whether a held object is present. By judging in a visual recognition manner whether an object is held, the invention provides a basis for judging human-robot interaction intent; it can greatly improve the accuracy with which a robot detects human intent and reduce misjudgments.
Description
Technical Field
The invention belongs to the technical field of visual recognition and relates to a method for detecting the presence of a handheld object.
Background
Since the first industrial robot was born more than half a century ago, efforts have been made to use robots to take over heavy human labor. The history of their development falls roughly into three stages. The first generation were teaching robots: an operator teaches the motion, and the robot continuously repeats the taught operation. The second generation were robots able to perceive external information, mainly by being equipped with various sensors for vision, touch, force, and the like. The third generation are intelligent robots, the stage currently being explored: they can autonomously judge task requirements from information about the external environment and interact freely with human beings.
Intelligent human-robot object handover is an important part of intelligent robotics. For a handover between a human and a robot to take place, the robot must be able to judge the intent of the human giver. Detecting whether an object is held in the human hand can greatly improve the accuracy with which the robot detects human intent and reduce misjudgments. At present, domestic research on this aspect is blank.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for detecting the presence of a handheld object.
The technical scheme adopted by the invention is as follows:
A handheld object presence detection method uses a camera sensor integrating a color camera and a depth camera. The method acquires color, depth, and human skeleton information simultaneously through the sensor, maps the human hand joint coordinate point to the depth image, extracts a hand mask region by a region growing method, maps the hand mask region to the color image, and judges the proportion of hand skin by an HSV threshold segmentation method to determine whether a held object is present. The method specifically comprises the following steps:
(1) Acquire the conversion relationship between the depth camera image and the color camera image.
The internal parameters of the color camera and the depth camera, and the external parameters of the corresponding checkerboard images, are obtained using Zhang's calibration method. The pixel coordinate systems, camera coordinate systems, and world coordinate system of the two cameras are thereby linked to one another, in preparation for the subsequent image alignment.
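As a concrete illustration of this calibration step (not part of the patent text), a minimal sketch in Python with OpenCV is given below; the checkerboard pattern size, square size, and all variable names are assumptions:

```python
# Sketch: Zhang's calibration applied to one camera. Run once with the
# color camera's checkerboard images and once with the depth (IR) camera's.
# PATTERN and SQUARE_MM are assumed values, not fixed by the patent.
import cv2
import numpy as np

PATTERN = (9, 6)      # inner-corner count of the checkerboard (assumed)
SQUARE_MM = 25.0      # checkerboard square size in mm (assumed)

def calibrate(images):
    """Return intrinsics K, distortion, and per-view extrinsics (R_i, T_i)."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs   # rvecs/tvecs: checkerboard external parameters
```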
For an optical imaging system, the conversion relationship between an image pixel point p = (u, v, 1)^T and a point P = (X, Y, Z)^T in the camera coordinate system is shown in formula (1).

z·p = K·P (1)

wherein K is the camera intrinsic matrix,

K = | f_x  0    u_0 |
    | 0    f_y  v_0 |
    | 0    0    1   |

dx and dy represent the physical size, in mm, of a pixel along the column and row directions; f is the focal length of the camera; f_x = f/dx and f_y = f/dy represent the scale factors of the camera in the horizontal and vertical directions, respectively; u_0 and v_0 represent the offsets of the camera optical center from the origin of the pixel coordinate system in the horizontal and vertical directions, respectively.
From formula (1), the conversion relationship between the image pixel coordinate point p_rgb of the color camera and the coordinate point P_rgb in the color camera coordinate system is shown in formula (2):

z_rgb·p_rgb = K_rgb·P_rgb (2)

Likewise, from formula (1), the conversion relationship between the image pixel coordinate point p_depth of the depth camera and the coordinate point P_depth in the depth camera coordinate system is shown in formula (3):

z_depth·p_depth = K_depth·P_depth (3)
For the same checkerboard image, the external parameters R_CO and T_CO of the color camera and the external parameters R_DO and T_DO of the depth camera are obtained, from which the rotation and translation between the two cameras follow:

R_CD = R_CO·R_DO^(-1) (4)

T_CD = T_CO - R_CD·T_DO (5)
For the coordinate points P_rgb and P_depth in the respective camera coordinate systems, expressed in non-homogeneous coordinates, the following relationship holds:

P_rgb = R_CD·P_depth + T_CD (6)
Combining formula (2), formula (3), and formula (6) yields:

z_rgb·p_rgb = K_rgb·R_CD·K_depth^(-1)·z_depth·p_depth + K_rgb·T_CD (7)

wherein z_rgb = z_depth. Formula (7) is then the conversion relationship between the pixel coordinate system of the depth image and that of the corresponding color image.
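The following sketch (an illustration under assumed variable names, not the patent's code) applies formula (7) to re-project one depth pixel into the color image; K_rgb, K_depth, R_CD, and T_CD would come from the calibration above:

```python
# Sketch of formula (7): map a depth pixel (u, v) with depth z to the
# corresponding color-image pixel. All inputs are NumPy arrays.
import numpy as np

def depth_pixel_to_color_pixel(u, v, z, K_depth, K_rgb, R_CD, T_CD):
    p_depth = np.array([u, v, 1.0])                  # homogeneous pixel point
    P_depth = z * np.linalg.inv(K_depth) @ p_depth   # back-project, formula (3)
    P_rgb = R_CD @ P_depth + T_CD.ravel()            # change of frame, formula (6)
    p_rgb = K_rgb @ P_rgb                            # project, formula (2)
    return p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]  # divide by z_rgb
```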
(2) Mount the two cameras on the robot platform with their optical axes parallel to the ground. With the human body 1-2.5 m from the cameras, have the cameras directly view the hand position, taking care that the hand is not occluded by other parts of the body, and collect color image and depth image data.
(3) Image preprocessing. Apply Gaussian filtering to the depth image data and fill the lost depth points. Convert the color image to the HSV color space to obtain an HSV image, and apply Gaussian filtering to the HSV image.
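A minimal preprocessing sketch, assuming an OpenCV pipeline and the kernel sizes of the embodiment below; the hole-filling strategy here (a median-filtered copy) is one simple choice, since the patent does not fix a particular method:

```python
# Sketch of step (3): smooth the depth map, fill lost depth points, and
# convert the color frame to a blurred HSV image.
import cv2
import numpy as np

def preprocess(depth_mm, color_bgr):
    depth = cv2.GaussianBlur(depth_mm.astype(np.float32), (5, 5), 0)
    filled = cv2.medianBlur(depth, 5)
    depth[depth_mm == 0] = filled[depth_mm == 0]     # fill zero (lost) pixels
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    hsv = cv2.GaussianBlur(hsv, (3, 3), 0)
    return depth, hsv
```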
(4) Read the recognized human hand joint with a skeleton recognition program and acquire the hand coordinate P_hand = (u, v, z), where u and v are the coordinates of the hand in the pixel coordinate system of the depth camera and z is the depth corresponding to the joint.
(5) Taking P_hand as the seed point, use a region growing method to iteratively traverse the depth image for coordinate points whose depth values lie within [z - T_l, z + T_r], where T_l and T_r are the lower and upper offsets of the segmentation threshold. Record all grown coordinate points to obtain the hand-related region mask. The segmentation threshold is set by manual adjustment so that the hand-related region mask clearly segments the hand region.
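A breadth-first region-growing sketch for this step (illustrative only; 4-connectivity is an assumed neighborhood choice):

```python
# Sketch of step (5): grow the hand mask from the seed P_hand = (u, v, z),
# keeping pixels whose depth lies within [z - T_l, z + T_r].
from collections import deque
import numpy as np

def grow_hand_mask(depth, seed_uv, seed_z, T_l, T_r):
    h, w = depth.shape
    mask = np.zeros((h, w), np.uint8)
    queue = deque([seed_uv])
    while queue:
        u, v = queue.popleft()
        if not (0 <= u < w and 0 <= v < h) or mask[v, u]:
            continue
        if not (seed_z - T_l <= depth[v, u] <= seed_z + T_r):
            continue
        mask[v, u] = 1
        queue.extend([(u + 1, v), (u - 1, v), (u, v + 1), (u, v - 1)])
    return mask   # hand-related region mask
```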
(6) Map the hand-related region mask obtained in step (5) onto the HSV image to obtain the HSV image of the hand-related region; traverse the region and integrate to obtain the region area S_all. Meanwhile, set the HSV color threshold of hand skin according to the specific skin color, traverse the HSV image of the hand-related region, and integrate the part within the skin color threshold interval to obtain the skin area S_skin.
(7) Calculate the hand skin proportion factor k = S_skin / S_all and judge the presence of a handheld object against a preset proportion threshold S: when k < S, a handheld object is considered present; otherwise, none is present. The proportion threshold S lies in the range [0.4, 0.7]; the specific value needs to be adjusted according to the actual effect.
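Steps (6) and (7) can be sketched as follows; the HSV skin bounds are illustrative defaults only, since the patent leaves the threshold to be set per skin color (see the next paragraph):

```python
# Sketch of steps (6)-(7): skin-area ratio inside the hand mask versus the
# proportion threshold S. SKIN_LO/SKIN_HI are assumed values.
import cv2
import numpy as np

SKIN_LO = np.array([0, 40, 60], np.uint8)      # assumed HSV lower bound
SKIN_HI = np.array([25, 255, 255], np.uint8)   # assumed HSV upper bound

def hand_holds_object(hsv, hand_mask, S):
    S_all = int(np.count_nonzero(hand_mask))            # region area
    if S_all == 0:
        return False                                    # no hand region found
    skin = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    S_skin = int(np.count_nonzero(skin[hand_mask > 0])) # skin area in region
    k = S_skin / S_all                                  # skin proportion factor
    return k < S                                        # little skin visible => object
```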
Furthermore, the HSV hand color threshold in step (6) can be set in various ways: a default threshold can be set directly, it can be set according to the recognized face skin color interval, or gloves of a special color can be used to constrain the hand color and improve recognition accuracy.
The beneficial effects of the invention are as follows: to fill the gap in existing research, a method for detecting the presence of a handheld object is innovatively proposed, which judges by visual recognition whether an object is held, thereby providing a basis for judging human-robot interaction intent.
Drawings
Fig. 1 is a program flow diagram.
Fig. 2 is a hand area mask image.
Fig. 3 is a hand segmentation image.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The specific implementation of the invention uses a camera sensor that integrates a color camera and a depth camera, can acquire color and depth images in real time, and has a built-in program that extracts skeleton coordinate points. The effective viewing angle is 70° horizontally and 60° vertically, the effective depth range is 0.5-4.5 m, the frame rate is 30 FPS, and the depth image resolution is 512 × 424.
A method for detecting the presence of a handheld object mainly comprises the following steps, as shown in Fig. 1:
(1) Acquire the conversion relationship between the depth camera image and the color camera image. The internal parameters of the color camera and the depth camera, and the external parameters of the corresponding checkerboard images, are obtained using Zhang's calibration method, linking the pixel coordinate systems, camera coordinate systems, and world coordinate system of the two cameras in preparation for the subsequent image alignment. For an optical imaging system, the conversion relationship between an image pixel point p and a point P in the camera coordinate system is shown in formula (1).

z·p = K·P (1)

wherein K is the camera intrinsic matrix; dx and dy represent the physical size, in mm, of a pixel along the column and row directions; f is the focal length of the camera; f_x = f/dx and f_y = f/dy represent the scale factors of the camera in the horizontal and vertical directions, respectively; u_0 and v_0 represent the offsets of the camera optical center from the origin of the pixel coordinate system in the horizontal and vertical directions, respectively.
From formula (1), the conversion relationship between the image pixel coordinate point p_rgb of the color camera and the coordinate point P_rgb in the color camera coordinate system is shown in formula (2).

z_rgb·p_rgb = K_rgb·P_rgb (2)

Likewise, from formula (1), the conversion relationship between the image pixel coordinate point p_depth of the depth camera and the coordinate point P_depth in the depth camera coordinate system is shown in formula (3).

z_depth·p_depth = K_depth·P_depth (3)
For the same checkerboard image, the external parameters R_CO and T_CO of the color camera and the external parameters R_DO and T_DO of the depth camera are obtained, and the relationship between the two cameras follows:

R_CD = R_CO·R_DO^(-1) (4)

T_CD = T_CO - R_CD·T_DO (5)
for coordinate points P under respective camera coordinate systems under the nonhomogeneous coordinate systemrgbAnd Pdept hThere is a relationship as follows:
Prgb=RCD·Pdept h+TCD(6)
the simultaneous formulas (2), (3) and (6) are as follows:
zrgb·prgb=Krgb·RCD·Kdept h -1·zdept h·pdept h+Krgb·TCD(7)
wherein z isrgb=zdept h. Then the equation (7) is the conversion relationship between the depth and the corresponding pixel coordinate system of the color image.
(2) Mount the cameras on the robot platform with their optical axes parallel to the ground, with the human body within 2 m of the cameras, so that the cameras directly view the hand position; take care that the hand is not occluded by other parts of the body, and collect color image and depth image data.
(3) Image preprocessing. Apply Gaussian filtering to the depth image with a 5 × 5 Gaussian kernel and fill the lost depth points. Convert the color image to the HSV color space to obtain an HSV image, and apply Gaussian filtering to the HSV image with a 3 × 3 Gaussian kernel.
(4) Read the recognized human hand joint with a skeleton recognition program and acquire the hand coordinate P_hand = (u, v, z), where u and v are the coordinates of the hand in the pixel coordinate system of the depth camera and z is the depth corresponding to the joint.
(5) Taking P_hand as the seed point, use a region growing method to iteratively traverse the depth image for coordinate points whose depth values lie within [z - T_l, z + T_r], where T_l = 20 mm and T_r = 20 mm. Record all grown coordinate points to obtain the hand-related region mask, whose image is shown in Fig. 2.
(6) Map the hand-related region mask onto the HSV image to obtain the HSV image of the hand-related region; traverse the region and integrate to obtain the region area S_all. Meanwhile, set the HSV color threshold of hand skin according to a yellow skin tone, traverse the HSV image of the hand-related region, and integrate the part within the skin color threshold interval to obtain the skin area S_skin.
(7) Calculate the hand skin proportion factor k = S_skin / S_all and judge the presence of a handheld object against the preset proportion threshold S = 0.55: when k < S, a handheld object is considered present; otherwise, none is present.
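Chaining the sketches above with this embodiment's parameters (T_l = T_r = 20 mm, S = 0.55) on synthetic frames, purely to illustrate the data flow (the frames stand in for real sensor data):

```python
import numpy as np

# Synthetic 100x100 frames standing in for real sensor data (assumed).
depth_frame = np.full((100, 100), 1500, np.uint16)   # flat scene at 1.5 m
color_frame = np.zeros((100, 100, 3), np.uint8)
color_frame[:] = (30, 80, 200)                       # skin-like BGR everywhere

depth, hsv = preprocess(depth_frame, color_frame)
mask = grow_hand_mask(depth, seed_uv=(50, 50), seed_z=1500.0, T_l=20.0, T_r=20.0)
print("object in hand:", hand_holds_object(hsv, mask, S=0.55))  # -> False (all skin)
```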
The above-described embodiments only express embodiments of the present invention and should not be understood as limiting the scope of the patent. It should be noted that those skilled in the art can make many variations and modifications without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.
Claims (3)
1. A method for detecting the presence of a hand-held object, comprising the steps of:
(1) acquiring the conversion relationship between the depth camera image and the color camera image;
firstly, acquiring internal parameters of a color camera and a depth camera and external parameters of a corresponding checkerboard image, and further establishing a relationship between a pixel coordinate system, a camera coordinate system and a world coordinate system of the two cameras;
for an optical imaging system, the conversion relationship between an image pixel point p and a point P in the camera coordinate system is shown in formula (1);

z·p = K·P (1)

wherein K is the camera intrinsic matrix; dx and dy represent the physical size, in mm, of a pixel along the column and row directions; f is the focal length of the camera; f_x = f/dx and f_y = f/dy represent the scale factors of the camera in the horizontal and vertical directions, respectively; u_0 and v_0 respectively represent the offsets of the camera optical center from the origin of the pixel coordinate system in the horizontal and vertical directions;
the conversion relationship between the image pixel coordinate point p_rgb of the color camera and the coordinate point P_rgb in the color camera coordinate system, obtained from formula (1), is shown in formula (2):

z_rgb·p_rgb = K_rgb·P_rgb (2)

likewise, the conversion relationship between the image pixel coordinate point p_depth of the depth camera and the coordinate point P_depth in the depth camera coordinate system, obtained from formula (1), is shown in formula (3):

z_depth·p_depth = K_depth·P_depth (3)
for the same checkerboard image, obtaining the external parameters R_CO and T_CO of the color camera and the external parameters R_DO and T_DO of the depth camera, and further the following relationships:

R_CD = R_CO·R_DO^(-1) (4)

T_CD = T_CO - R_CD·T_DO (5)
for the coordinate points P_rgb and P_depth in the respective camera coordinate systems, expressed in non-homogeneous coordinates, the following relationship holds:

P_rgb = R_CD·P_depth + T_CD (6)
combining formula (2), formula (3), and formula (6) yields:

z_rgb·p_rgb = K_rgb·R_CD·K_depth^(-1)·z_depth·p_depth + K_rgb·T_CD (7)

wherein z_rgb = z_depth; formula (7) is the conversion relationship between the pixel coordinate system of the depth image and that of the corresponding color image;
(2) mounting the two cameras on a robot platform with their optical axes parallel to the ground, the human body being 1-2.5 m from the cameras, so that the cameras directly view the hand position and the hand is not occluded by other body parts, and collecting color image and depth image data;
(3) image preprocessing: applying Gaussian filtering to the depth image data and filling lost depth points; converting the color image to the HSV color space to obtain an HSV image, and applying Gaussian filtering to the HSV image;
(4) reading the recognized human hand joint with a skeleton recognition program and acquiring the hand coordinate P_hand = (u, v, z), wherein u and v represent the coordinates of the hand in the pixel coordinate system of the depth camera, and z represents the depth corresponding to the joint;
(5) taking P_hand as the seed point, iteratively traversing the depth image by a region growing method for coordinate points whose depth values lie within [z - T_l, z + T_r], wherein T_l and T_r are the lower and upper offsets of the segmentation threshold; recording all grown coordinate points to obtain the hand-related region mask; the segmentation threshold is set by manual adjustment so that the hand-related region mask clearly segments the hand region;
(6) mapping the hand-related region mask obtained in step (5) onto the HSV image to obtain the HSV image of the hand-related region, traversing the region and integrating to obtain the region area S_all; meanwhile, setting the HSV color threshold of hand skin according to the specific skin color, traversing the HSV image of the hand-related region, and integrating the part within the skin color threshold interval to obtain the skin area S_skin;

(7) calculating the hand skin proportion factor k = S_skin / S_all and judging the presence of a handheld object against a preset proportion threshold S: when k < S, a handheld object is deemed present; otherwise, no handheld object is present.
2. A method for detecting the presence of a hand-held object as claimed in claim 1, wherein the proportion threshold S is in the range [0.4, 0.7].
3. A method for detecting the presence of a hand-held object as claimed in claim 1 or 2, wherein the HSV hand color threshold in step (6) is set in one of the following ways: directly setting a default threshold; setting it according to the recognized face skin color interval; or using gloves of a special color to constrain the hand color and improve recognition accuracy.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010326599.3A | 2020-04-23 | 2020-04-23 | Handheld object existence detection method
Publications (2)
Publication Number | Publication Date |
---|---|
CN111553891A (en) | 2020-08-18
CN111553891B (en) | 2022-09-27
Family
ID=72001591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010326599.3A Active CN111553891B (en) | 2020-04-23 | 2020-04-23 | Handheld object existence detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111553891B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108470339A (en) * | 2018-03-21 | 2018-08-31 | 华南理工大学 | A kind of visual identity of overlapping apple and localization method based on information fusion |
CN110648367A (en) * | 2019-08-15 | 2020-01-03 | 大连理工江苏研究院有限公司 | Geometric object positioning method based on multilayer depth and color visual information |
Non-Patent Citations (1)
Title |
---|
Huang Chaomei et al., "Target recognition and localization of a mobile robot based on information fusion," Computer Measurement & Control
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113128435A (en) * | 2021-04-27 | 2021-07-16 | 南昌虚拟现实研究院股份有限公司 | Hand region segmentation method, device, medium and computer equipment in image |
CN113128435B (en) * | 2021-04-27 | 2022-11-22 | 南昌虚拟现实研究院股份有限公司 | Hand region segmentation method, device, medium and computer equipment in image |
Also Published As
Publication number | Publication date |
---|---|
CN111553891B (en) | 2022-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035200B (en) | Bolt positioning and pose detection method based on single-eye and double-eye vision cooperation | |
CN107767423B (en) | mechanical arm target positioning and grabbing method based on binocular vision | |
CN109297413B (en) | Visual measurement method for large-scale cylinder structure | |
CN110189314B (en) | Automobile instrument panel image positioning method based on machine vision | |
CN107677274B (en) | Unmanned plane independent landing navigation information real-time resolving method based on binocular vision | |
CN111721259B (en) | Underwater robot recovery positioning method based on binocular vision | |
CN111476841B (en) | Point cloud and image-based identification and positioning method and system | |
CN105528789B (en) | Robot visual orientation method and device, vision calibration method and device | |
US20100119146A1 (en) | Robot system, robot control device and method for controlling robot | |
US10922824B1 (en) | Object tracking using contour filters and scalers | |
CN111897349A (en) | Underwater robot autonomous obstacle avoidance method based on binocular vision | |
CN109940626B (en) | Control method of eyebrow drawing robot system based on robot vision | |
CN112132874B (en) | Calibration-plate-free heterogeneous image registration method and device, electronic equipment and storage medium | |
CN111784655B (en) | Underwater robot recycling and positioning method | |
CN112906797A (en) | Plane grabbing detection method based on computer vision and deep learning | |
Momeni-k et al. | Height estimation from a single camera view | |
WO2022036478A1 (en) | Machine vision-based augmented reality blind area assembly guidance method | |
CN112053392A (en) | Rapid registration and fusion method for infrared and visible light images | |
CN108074265A (en) | A kind of tennis alignment system, the method and device of view-based access control model identification | |
CN111553891B (en) | Handheld object existence detection method | |
Song et al. | Navigation algorithm based on semantic segmentation in wheat fields using an RGB-D camera | |
CN116236222A (en) | Ultrasonic probe pose positioning system and method of medical remote ultrasonic scanning robot | |
TWI274845B (en) | Equipment for detecting the object corner and distance using a sole lens | |
CN113863966A (en) | Segment grabbing pose detection device and detection method based on deep learning vision | |
CN113240751B (en) | Calibration method for robot tail end camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |