CN111553891A - Handheld object existence detection method - Google Patents
- Publication number: CN111553891A (application CN202010326599.3A)
- Authority
- CN
- China
- Legal status: Granted (the listed status is an assumption, not a legal conclusion)
Classifications
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T5/70 — Denoising; Smoothing
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06T7/187 — Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/90 — Determination of colour characteristics

(All under G — Physics; G06 — Computing; G06T — Image data processing or generation, in general.)
Abstract
Description
Technical Field
The invention belongs to the technical field of visual recognition and relates to a method for detecting the presence of a hand-held object.
Background Art
Since the first industrial robot was born some 150 years ago, people have worked to use robots to take over heavy human labor. Historically, this development has passed through roughly three stages. In the early stage, first-generation robots were teaching robots: an operator demonstrated a motion and the robot repeated it over and over. Second-generation robots could perceive external information; equipped with various sensors, they sensed vision, touch, force, and similar signals. Third-generation robots are intelligent robots, a stage still being explored: they can autonomously judge task requirements from information about the external environment and interact freely with humans.
Intelligent human-robot object handover is an important part of intelligent robotics. For humans to carry out a handover with a robot, the robot must be able to judge the human giver's intention, and detecting whether an object is held in the human hand can greatly improve the accuracy of that intention detection and reduce misjudgment. Domestic research in this area is currently still blank.
Summary of the Invention
In view of the problems in the prior art, the present invention proposes a method for detecting the presence of a hand-held object.
The technical scheme adopted by the present invention is as follows:
A method for detecting the presence of a hand-held object. The sensor used is a camera sensor that integrates a color camera and a depth camera. The sensor simultaneously acquires color, depth, and human-skeleton information. The coordinates of the human hand joint are mapped onto the depth image, the hand mask region is extracted by region growing, and the mask is then mapped onto the color image, where an HSV threshold segmentation method judges the proportion of hand skin in the region and thereby confirms whether a held object is present. The method comprises the following steps:
(1) Obtain the conversion relationship between the depth-camera and color-camera images.
Zhang Zhengyou's calibration method is used to obtain the intrinsic parameters of the color and depth cameras and their extrinsic parameters for a common checkerboard image. This links the pixel, camera, and world coordinate systems of the two cameras to each other, in preparation for the subsequent image alignment.
For an optical imaging system, the relation between an image pixel point p and the corresponding point P in the camera coordinate system is given by formula (1):
z · p = K · P    (1)
where K is the camera intrinsic matrix; dx and dy give the physical size (in mm) of a pixel along each column and row; f is the camera focal length; f_x = f/dx and f_y = f/dy are the scale factors of the camera in the horizontal and vertical directions; and u_0 and v_0 are the horizontal and vertical offsets between the camera's optical center and the origin of the pixel coordinate system.
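As a concrete illustration of the pinhole relation z · p = K · P, the sketch below builds K from assumed values of f, dx, dy, u_0, and v_0 (none of these numbers come from the patent) and projects a camera-frame point to pixel coordinates:

```python
# Minimal pinhole-model sketch of formula (1); all numeric values are
# illustrative assumptions, not calibration results from the patent.
import numpy as np

f = 3.6                # focal length in mm (assumed)
dx, dy = 0.01, 0.01    # pixel pitch in mm per pixel (assumed)
u0, v0 = 256.0, 212.0  # principal point in pixels (assumed)

fx, fy = f / dx, f / dy  # scale factors f_x = f/dx, f_y = f/dy

K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])  # intrinsic matrix K

def project(P):
    """Map a camera-frame point P = (X, Y, Z) to pixel coordinates."""
    p_h = K @ P          # z * p = K * P
    return p_h / p_h[2]  # divide by z to get (u, v, 1)

p = project(np.array([0.1, 0.0, 1.0]))  # point 1 m ahead, 10 cm to the right
```

With fx = 360 the 10 cm lateral offset at 1 m lands 36 pixels right of the principal point, as expected from the model.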
From formula (1), the conversion between the color camera's pixel point p_rgb and the color-camera coordinate point P_rgb is given by formula (2):
z_rgb · p_rgb = K_rgb · P_rgb    (2)
Similarly, from formula (1), the conversion between the depth camera's pixel point p_depth and the depth-camera coordinate point P_depth is given by formula (3):
z_depth · p_depth = K_depth · P_depth    (3)
For the same checkerboard image, the extrinsic parameters R_CO and T_CO of the color camera and R_DO and T_DO of the depth camera are obtained, from which the relationship between the two cameras follows:
R_CD = R_CO · R_DO^(-1)    (4)

T_CD = T_CO - R_CD · T_DO    (5)
In inhomogeneous coordinates, the points P_rgb and P_depth in their respective camera coordinate systems are related as follows:
P_rgb = R_CD · P_depth + T_CD    (6)
Combining formulas (2), (3), and (6) yields:
z_rgb · p_rgb = K_rgb · R_CD · K_depth^(-1) · z_depth · p_depth + K_rgb · T_CD    (7)
where z_rgb = z_depth. Formula (7) is thus the conversion between corresponding pixel coordinates of the depth image and the color image.
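The depth-to-color pixel mapping of formula (7) can be sketched as follows; the intrinsic matrices, R_CD, and T_CD below are placeholder values standing in for the results of the checkerboard calibration, not values from the patent:

```python
# Sketch of formula (7): map a depth-image pixel (u, v) with depth z to
# its corresponding color-image pixel. All calibration values are
# assumed placeholders.
import numpy as np

K_rgb   = np.array([[520.0, 0.0, 480.0],
                    [0.0, 520.0, 270.0],
                    [0.0, 0.0, 1.0]])
K_depth = np.array([[365.0, 0.0, 256.0],
                    [0.0, 365.0, 212.0],
                    [0.0, 0.0, 1.0]])
R_CD = np.eye(3)                    # assumed: optical axes nearly parallel
T_CD = np.array([0.025, 0.0, 0.0])  # assumed 25 mm baseline, in metres

def depth_to_color(u, v, z):
    """z_rgb * p_rgb = K_rgb * R_CD * K_depth^-1 * z * p_depth + K_rgb * T_CD."""
    p_depth = np.array([u, v, 1.0])
    rhs = K_rgb @ R_CD @ np.linalg.inv(K_depth) @ (z * p_depth) + K_rgb @ T_CD
    return rhs[:2] / rhs[2]         # (u, v) in the color image

u_c, v_c = depth_to_color(256.0, 212.0, 1.5)  # depth-image principal point
```

Applying this mapping per pixel is what aligns the hand mask from the depth image with the HSV image in the later steps.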
(2) Mount the two cameras on the robot platform with their optical axes parallel to the ground. With the human body within 1-2.5 m of the cameras, let the cameras view the hand position directly, taking care that the hand is not occluded by other parts of the body, and collect the color-image and depth-image data.
(3) Image preprocessing. Apply Gaussian filtering to the depth-image data and fill in the missing depth points. Convert the color image to the HSV color space to obtain an HSV image, and apply Gaussian filtering to it as well.
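A minimal numpy sketch of this preprocessing step, with an assumed 5 × 5 kernel and a crude mean-fill for the missing depth points (the patent does not prescribe a particular fill strategy):

```python
# Gaussian smoothing of a depth patch with missing (zero) depth points.
# Kernel size, sigma, and the mean-fill strategy are assumptions.
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()                    # normalize so weights sum to 1

def smooth_depth(depth, size=5, sigma=1.0):
    d = depth.astype(float).copy()
    d[d == 0] = d[d > 0].mean()           # crude fill of lost depth points
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(d, pad, mode="edge")
    out = np.empty_like(d)
    h, w = d.shape
    for i in range(h):                    # direct 2-D convolution
        for j in range(w):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out

patch = np.full((8, 8), 1500.0)           # 1.5 m everywhere...
patch[4, 4] = 0.0                         # ...with one lost depth point
smoothed = smooth_depth(patch)
```

In practice an image library's Gaussian blur would replace the explicit loop; the sketch only makes the filter's behavior concrete.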
(4) Use the skeleton-recognition program to read the recognized human hand joint and obtain the hand coordinates P_hand = (u, v, z), where u and v are the hand coordinates in the depth camera's pixel coordinate system and z is the depth of the joint.
(5) Take P_hand as the seed point and, in the depth image, use the region-growing method to iteratively traverse the coordinate points whose depth values lie in [z - T_l, z + T_r], where T_l and T_r are the lower and upper segmentation-threshold offsets. Record all grown coordinate points to obtain the hand-region mask. The segmentation thresholds are set by manual adjustment so that the mask cleanly segments the hand region.
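The region-growing step can be sketched as a breadth-first traversal from the seed; the toy depth map, 4-connectivity, and threshold values are illustrative assumptions:

```python
# BFS region growing on a depth image from the hand seed, keeping pixels
# whose depth lies within [z - tl, z + tr] of the seed depth z.
from collections import deque
import numpy as np

def grow_hand_mask(depth, seed, tl, tr):
    """Return a boolean mask of pixels 4-connected to `seed` (row, col)
    whose depth is within [z - tl, z + tr] of the seed depth z."""
    h, w = depth.shape
    z = depth[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and z - tl <= depth[nr, nc] <= z + tr):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

depth = np.full((6, 6), 2000.0)   # background at 2 m (depths in mm)
depth[1:4, 1:4] = 1000.0          # toy "hand" blob at 1 m
mask = grow_hand_mask(depth, (2, 2), tl=20.0, tr=20.0)
```

The resulting boolean mask is what the description calls the hand-region mask; it covers only the 3 × 3 blob connected to the seed.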
(6) Map the hand-region mask obtained in step (5) onto the HSV image to obtain the HSV image of the hand region; traverse and integrate over this region to obtain its area S_all. At the same time, set the HSV color thresholds for hand skin according to the actual skin color, traverse the HSV image of the hand region, and integrate the part lying within the skin-color threshold interval to obtain the skin area S_skin.
(7) Compute the hand-skin scale factor s = S_skin / S_all and judge the presence of a hand-held object against a preset ratio threshold S: when s < S, a hand-held object is considered present; otherwise it is not. The ratio threshold S lies in the range [0.4, 0.7]; the specific value is adjusted according to the actual effect.
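Steps (6) and (7) amount to a masked HSV range count followed by a ratio test. In the sketch below, the skin range and the threshold S = 0.55 are assumed values in the spirit of the description, not exact values from the patent:

```python
# Skin-ratio decision: count skin-colored pixels inside the hand mask
# and compare s = S_skin / S_all against the threshold S.
# The HSV skin range and S = 0.55 are assumptions.
import numpy as np

def holds_object(hsv, mask, skin_lo, skin_hi, S=0.55):
    """hsv: HxWx3 float array (H in [0, 360), S and V in [0, 1])."""
    region = hsv[mask]                     # pixels under the hand mask
    in_range = np.all((region >= skin_lo) & (region <= skin_hi), axis=1)
    s = int(in_range.sum()) / len(region)  # s = S_skin / S_all
    return s < S                           # skin occluded -> object held

skin_lo = np.array([0.0, 0.15, 0.3])       # assumed skin-color range
skin_hi = np.array([50.0, 0.8, 1.0])

hsv = np.zeros((4, 4, 3))
hsv[..., :] = (25.0, 0.5, 0.7)             # every pixel skin-like
mask = np.ones((4, 4), dtype=bool)
held = holds_object(hsv, mask, skin_lo, skin_hi)   # bare hand: False

hsv[:2] = (200.0, 0.1, 0.1)                # an object covers half the hand
held2 = holds_object(hsv, mask, skin_lo, skin_hi)  # s = 0.5 < 0.55: True
```

The decision logic mirrors the description: the more of the hand region an object occludes, the smaller the skin ratio s becomes.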
Further, the HSV skin-color thresholds in step (6) can be set in several ways: a default threshold can be used directly, the thresholds can be derived from the recognized skin-color range of the person's face, or gloves of a dedicated color can be worn to constrain the hand color and improve recognition accuracy.
The beneficial effects of the present invention are as follows: to fill the existing research gap, a method for detecting the presence of a hand-held object is innovatively proposed; whether an object is being held is judged by visual recognition, providing a basis for intention judgment in human-robot interaction.
Brief Description of the Drawings
Figure 1 is the program flow chart.
Figure 2 is the hand-region mask image.
Figure 3 is the hand segmentation image.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and an example.
The specific embodiment of the present invention uses a camera sensor that integrates a color camera and a depth camera. It acquires color and depth images in real time and has a built-in program to extract skeleton joint coordinates. Its effective viewing angle is 70° horizontally and 60° vertically, its effective depth range is 0.5-4.5 m, its frame rate is 30 FPS, and its depth-image resolution is 512 × 424.
A method for detecting the presence of a hand-held object, whose main flow is shown in Figure 1, comprises the following steps:
(1) Obtain the conversion relationship between the depth-camera and color-camera images. Zhang Zhengyou's calibration method is used to obtain the intrinsic parameters of the color and depth cameras and their extrinsic parameters for a common checkerboard image, linking the pixel, camera, and world coordinate systems of the two cameras in preparation for the subsequent image alignment. For an optical imaging system, the relation between an image pixel point p and the corresponding point P in the camera coordinate system is given by formula (1):
z · p = K · P    (1)
where K is the camera intrinsic matrix; dx and dy give the physical size (in mm) of a pixel along each column and row; f is the camera focal length; f_x = f/dx and f_y = f/dy are the scale factors of the camera in the horizontal and vertical directions; and u_0 and v_0 are the horizontal and vertical offsets between the camera's optical center and the origin of the pixel coordinate system.
From formula (1), the conversion between the color camera's pixel point p_rgb and the color-camera coordinate point P_rgb is given by formula (2):
z_rgb · p_rgb = K_rgb · P_rgb    (2)
Similarly, from formula (1), the conversion between the depth camera's pixel point p_depth and the depth-camera coordinate point P_depth is given by formula (3):
z_depth · p_depth = K_depth · P_depth    (3)
For the same checkerboard image, the extrinsic parameters R_CO and T_CO of the color camera and R_DO and T_DO of the depth camera are obtained, from which the relationship between the two cameras follows:
R_CD = R_CO · R_DO^(-1)    (4)

T_CD = T_CO - R_CD · T_DO    (5)
In inhomogeneous coordinates, the points P_rgb and P_depth in their respective camera coordinate systems are related as follows:
P_rgb = R_CD · P_depth + T_CD    (6)
Combining formulas (2), (3), and (6) yields:
z_rgb · p_rgb = K_rgb · R_CD · K_depth^(-1) · z_depth · p_depth + K_rgb · T_CD    (7)
where z_rgb = z_depth. Formula (7) is thus the conversion between corresponding pixel coordinates of the depth image and the color image.
(2) The camera is mounted on the robot platform with its optical axis parallel to the ground, and the human body stands within 2 m of the camera. The camera views the hand position directly, taking care that the hand is not occluded by other parts of the body, and the color-image and depth-image data are collected.
(3) Image preprocessing. Gaussian filtering with a 5 × 5 kernel is applied to the depth image and the missing depth points are filled in. The color image is converted to the HSV color space to obtain an HSV image, which is then Gaussian-filtered with a 3 × 3 kernel.
(4) The skeleton-recognition program reads the recognized human hand joint and obtains the hand coordinates P_hand = (u, v, z), where u and v are the hand coordinates in the depth camera's pixel coordinate system and z is the depth of the joint.
(5) P_hand is taken as the seed point and, in the depth image, the region-growing method iteratively traverses the coordinate points whose depth values lie in [z - T_l, z + T_r], where T_l = 20 mm and T_r = 20 mm. All grown coordinate points are recorded to obtain the hand-region mask, shown in Figure 2.
(6) The hand-region mask is mapped onto the HSV image to obtain the HSV image of the hand region, which is traversed and integrated to obtain the region area S_all. At the same time, the HSV skin-color thresholds for hand skin are set for typical East Asian skin tones, the HSV image of the hand region is traversed, and the part within the skin-color threshold interval is integrated to obtain the skin area S_skin.
(7) The hand-skin scale factor s = S_skin / S_all is computed and compared with the preset ratio threshold S = 0.55: when s < S, a hand-held object is considered present; otherwise it is not.
The above embodiment only expresses one implementation of the present invention and should not be construed as limiting the scope of the patent. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.
Claims (3)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010326599.3A | 2020-04-23 | 2020-04-23 | A method for detecting the presence of a hand-held object |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111553891A | 2020-08-18 |
| CN111553891B | 2022-09-27 |
Family ID: 72001591
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010326599.3A (granted as CN111553891B, Active) | A method for detecting the presence of a hand-held object | 2020-04-23 | 2020-04-23 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN111553891B |
Citations (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN108470339A | 2018-03-21 | 2018-08-31 | 华南理工大学 | A kind of visual identity of overlapping apple and localization method based on information fusion |
| CN110648367A | 2019-08-15 | 2020-01-03 | 大连理工江苏研究院有限公司 | Geometric object positioning method based on multilayer depth and color visual information |
Non-Patent Citations (1)

| Title |
|---|
| 黄朝美等: 基于信息融合的移动机器人目标识别与定位, 《计算机测量与控制》 |
Cited By (4)

| Publication Number | Priority Date | Publication Date | Title |
|---|---|---|---|
| CN114302050A | 2020-09-22 | 2022-04-08 | Image processing method and device, and non-volatile storage medium |
| CN114626183A | 2020-12-14 | 2022-06-14 | Safety model characterization and quantitative evaluation method for test assembly process |
| CN113128435A | 2021-04-27 | 2021-07-16 | Hand region segmentation method, device, medium and computer equipment in image |
| CN113128435B | 2021-04-27 | 2022-11-22 | Hand region segmentation method, device, medium and computer equipment in image |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN111553891B | 2022-09-27 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |