CN100393486C - A Fast Tracking Method Based on Object Surface Color - Google Patents


Info

Publication number
CN100393486C
CNB2004100688713A · CN200410068871A · CN100393486C
Authority
CN
China
Prior art keywords
image
camera
area
color
fast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100688713A
Other languages
Chinese (zh)
Other versions
CN1721144A (en)
Inventor
赵晓光
谭民
杜欣
汪建华
徐德
李原
梁自泽
景奉水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CNB2004100688713A priority Critical patent/CN100393486C/en
Publication of CN1721144A publication Critical patent/CN1721144A/en
Application granted granted Critical
Publication of CN100393486C publication Critical patent/CN100393486C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract


A fast tracking method and device based on object surface color. An image acquisition card is installed in a computer, and images of a moving object are captured into the computer through a camera and the acquisition card. A dedicated image processing algorithm then selects the required object according to the characteristics of the color blocks on its surface and outputs the centroid position of the object's image. The difference between the centroid position and a given image point is used as the feedback control quantity to control the robot's motion, which in turn moves the camera and achieves fast tracking of the object. The image processing method of the invention is simple and fast, forms an independent unit, and has strong adaptability and portability. It employs a learning method based on color information, adapting well to changes in the object and in ambient light, and keeps the object within the camera's field of view at all times. The invention is suitable for fields such as intelligent surveillance, automatic inspection of industrial products, and visual control of assembly lines.

Figure 200410068871

Description

A Fast Tracking Method Based on Object Surface Color

Technical Field

The invention belongs to the technical field of visual tracking in robotics, and specifically concerns a method and device for acquiring the surface image of a moving object, selecting a specific object according to the color features of the image, and tracking it quickly.

Background Art

At present, research on vision-based tracking of fast-moving objects requires the tracked object to have distinctive color features, so a single-color mark is typically pasted on the object's surface (for a typical design see: Hu Ying, Zhao Shuying, Xu Xinhe, "Research on color mark design and recognition algorithms", Journal of Image and Graphics, Vol. 7 (Edition A), No. 12, December 2002, pp. 1291–1295). Pasting color marks has limitations and is unsuitable for applications such as intelligent surveillance and assembly-line part tracking.

Summary of the Invention

The object of the present invention is to provide a fast tracking method based on object surface color: a simple, fast, and effective image processing method suitable for visual tracking.

Another object of the present invention is to provide a device that implements the fast tracking method based on object surface color.

To achieve the above objects, the technical solution of the present invention is a fast tracking method based on object surface color, in which image recognition is performed during the motion of the object according to the following steps:

Step 1: First separate the object to be recognized and tracked from the background, then acquire images in real time. Each image is compared against the computed HSV thresholds; pixel regions that fall within the color range are retained and the remainder is removed as background, segmenting the object image:

F(x,y) = 1 (t1 <= F(x,y) <= t2)

F(x,y) = 0 (otherwise);

Step 2: Binarize the segmented object to produce a black-and-white image, and filter this binary image to obtain a smooth black-and-white image;

Step 3: Sharpen the edges with the Canny operator, and apply a dilation algorithm to remove small holes;

Step 4: Use an edge-extraction algorithm to obtain the contour of the object;

Step 5: Perform image recognition based on shape features: according to the geometric model of the tracked object, discard pixel regions that do not fit the model and locate the centroid of the tracked object's image;

Step 6: After the centroid of the object image is determined, use the difference between the centroid position and a given image point as the feedback control quantity, and control the robot to move the camera so that the image of the object always stays within the camera's field of view, thereby tracking the moving object.
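The six steps above can be sketched in code. Below is a minimal illustration of the Step 1 thresholding rule, keeping only pixels whose channel value lies inside the learned range [t1, t2]; the function name and the pure-Python list representation are assumptions for illustration, not part of the patent.

```python
def segment(channel, t1, t2):
    """Step-1 segmentation sketch: F(x,y)=1 if t1 <= value <= t2, else 0.

    `channel` is a 2-D list of per-pixel values (e.g. the H component of
    an HSV image); the result is a binary mask of the same shape.
    """
    return [[1 if t1 <= v <= t2 else 0 for v in row] for row in channel]

# Example: hue values, learned range [20, 60]
mask = segment([[10, 50], [90, 30]], 20, 60)
# mask == [[0, 1], [0, 1]]
```

In the real pipeline this comparison would run on every frame delivered by the acquisition card, producing the binary image that Steps 2 to 5 operate on.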

In the fast tracking method, the fifth step, image recognition based on shape features, uses a shape parameter F that to some extent describes the compactness of a region; it is computed from the region's perimeter B and area A:

F = B*B/(4*PI*A)

where the shape parameter attains its minimum value of 1 for a circular region, while F is always greater than 1 for regions of any other shape.
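As a quick check of the formula, the compactness F can be computed for the two shapes the text refers to; this is a hedged sketch, with function and variable names chosen for illustration:

```python
import math

def shape_factor(perimeter_b, area_a):
    # F = B*B / (4*PI*A): equals 1 for a circle, > 1 for any other shape
    return perimeter_b * perimeter_b / (4.0 * math.pi * area_a)

# Circle of radius r: B = 2*pi*r, A = pi*r^2  ->  F = 1
r = 3.0
f_circle = shape_factor(2 * math.pi * r, math.pi * r * r)

# Square of side s: B = 4*s, A = s^2  ->  F = 4/pi (about 1.27)
s = 3.0
f_square = shape_factor(4 * s, s * s)
```

The inequality f_square > f_circle is what lets Step 5 pick out, say, a circular region from square-ish noise regions of similar area.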

The fast tracking method further comprises a learning stage before motion tracking, using an online learning method: a) before the tracking task, a digitized RGB color image is obtained through the image acquisition card; b) the user selects a rectangular region of the object to be tracked with the mouse; c) the operating system stores the selected local image on the computer as a BMP file, to serve as the thresholds needed for later recognition and the basis for segmenting images acquired in real time; d) the local color image is converted to the HSV model, and histograms of its H and S components are computed to obtain the H and S thresholds of the selected region.

In the subsequent real-time image recognition, the H and S thresholds serve as the criterion for object segmentation and do not change until the user performs learning again.
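One plausible way to turn the histogram of the selected region into a fixed threshold pair is to keep the central mass of the sampled channel values; the coverage fraction and the function below are assumptions for illustration and are not specified by the patent:

```python
def learn_thresholds(values, coverage=0.95):
    """Return (t1, t2) bounding the central `coverage` fraction of the
    channel values (H or S) sampled from the user-selected rectangle."""
    s = sorted(values)
    k = round(len(s) * (1.0 - coverage) / 2.0)  # samples trimmed per tail
    return s[k], s[len(s) - 1 - k]

# Hue samples 0..99 with 90% coverage trim 5 samples from each tail
t1, t2 = learn_thresholds(list(range(100)), coverage=0.90)
# (t1, t2) == (5, 94)
```

Once computed, the pair (t1, t2) stays fixed for the whole tracking run, exactly as the text describes, until the user selects a new region.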

Without calibrating the camera, the method uses the error between a given point in the image and the centroid of the object as the control feedback quantity to achieve fast and accurate visual tracking.

The method recognizes the object and performs visual tracking based on the color information of its surface.

A device implementing the fast tracking method based on object surface color comprises a robot, a robot control system, and a vision processing system. The robot control system consists of a master control computer and a robot controller; the vision processing system consists of a camera, an image acquisition card, and an image processing computer. The camera is mounted on the end of the robot, with its output connected to the image acquisition card, which is installed in the image processing computer. The robot is electrically connected to the robot controller, and the robot controller and the image processing computer are each electrically connected to the master computer.

In the device, the robot is a five-degree-of-freedom robot composed of a three-degree-of-freedom Cartesian robot and a two-degree-of-freedom rotating wrist. The rotating wrist is mounted at the end of the Cartesian robot's vertical axis, and the camera is fixed to the rotating wrist; the robot is controlled by a master control computer and a robot controller.

In the device, the image acquisition card is a PCI-bus card installed in a general-purpose PC with a clock frequency of at least 2.8 GHz, forming the image processing system.

The outstanding feature of the present invention is that the camera needs no calibration and no color marks need to be pasted; objects whose surfaces carry multiple colors can be tracked quickly.

The image processing method of the invention is simple, fast, and effective; it forms an independent unit with strong adaptability and portability. Image processing employs a learning method based on color information, which adapts well to changes in the object and in ambient light. Without calibrating the camera, images of the moving object are obtained with the camera and the image acquisition card, and a dedicated image processing algorithm learns the color features of the object's surface and the areas of its color blocks; the learning results serve as the criteria for recognizing and delimiting the object during tracking. Once the image of the tracked object is obtained, the image centroid position c(u_c, v_c) is computed, and the pixel error e between the centroid c(u_c, v_c) and a given image point s(u_s, v_s) is used as the visual feedback quantity to control the robot to move the camera. The robot's rotating joints track the moving object with quick response and high tracking speed, keeping the image of the object within the camera's field of view at all times. The visual processing method of the invention is insensitive to changes in ambient light and is suitable for tracking moving objects whose surfaces are covered with multiple colors.
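The feedback law described above can be sketched as a simple proportional controller; the gain, the sign convention, and the function names are assumptions for illustration and do not come from the patent:

```python
def servo_command(c, s, gain=0.5):
    """Proportional visual-servoing sketch: the pixel error e = c - s
    between the object centroid c(u_c, v_c) and the given image point
    s(u_s, v_s) drives the camera-carrying joints to shrink the error."""
    e_u = c[0] - s[0]
    e_v = c[1] - s[1]
    # Command proportional to the error; the sign that actually reduces
    # the error depends on the camera/joint geometry of the real robot.
    return (-gain * e_u, -gain * e_v)

# Centroid 10 px right of and 10 px below the set point
cmd = servo_command((330, 250), (320, 240))
# cmd == (-5.0, -5.0)
```

Because the controller works purely in pixel coordinates, no camera calibration is needed, which matches the claim that the camera is uncalibrated.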

The invention is suitable for fields such as intelligent surveillance, automatic inspection of industrial products, and visual control of assembly lines.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the device of the present invention implementing the fast tracking method based on object surface color;

Figure 2 is a schematic diagram of the moving-image processing flow of the fast tracking method based on object surface color.

Detailed Description of the Embodiments

A device implementing the fast tracking method based on object surface color comprises a robot, a robot control system, and a vision processing system; the overall arrangement is shown in Figure 1. The robot control system consists of a master control computer and a robot controller, and the vision processing system consists of a camera, an image acquisition card, and an image processing computer. In the present invention, the camera is mounted on the end of the robot, with its output connected to the image acquisition card installed in the image processing computer. The robot is electrically connected to the robot controller, and the robot controller and the image processing computer are each electrically connected to the master computer.

In the image processing algorithm, an online learning method is used. Before the tracking task, a digitized RGB color image is obtained through the image acquisition card. The user selects a rectangular region of the object to be tracked with the mouse. The system stores the selected local image on the computer as a BMP file, to serve as the thresholds needed for later recognition and the basis for segmenting images acquired in real time. The local color image is converted to the HSV model, and histograms of its H and S components are computed to obtain the H and S thresholds of the selected region. In the subsequent real-time image recognition these thresholds, as the criterion for object segmentation, do not change until the user performs learning again.

The benefit of this learning process is that no modification to the program itself is needed when the tracked object changes. Whenever conditions change, for example when the lighting changes markedly or the tracked object is replaced, it suffices to capture a current image before tracking and select the tracked object with the mouse to complete the learning process.

When tracking starts, the program first reads the BMP file containing the local image of the object and generates the HSV histogram and thresholds of the tracked object from it. The image card acquires images in real time, working in parallel; each image is compared against the thresholds, the background is removed, the object is segmented, and the object's image edge and center point are found. As long as the tracked object does not change and the lighting does not change strongly, no relearning is needed until the tracking process is complete. The full processing flow is shown in Figure 2.

During moving-object tracking, the robot moves the camera so that the moving object always stays within the camera's field of view. In this process, the steps of object image recognition are as follows:

Step 1: First separate the object to be recognized and tracked from the background. The background is the set of stationary pixels in the image; it does not belong to any object moving in front of the camera. Then acquire images in real time. Each image is compared against the HSV thresholds just computed; pixel regions within the color range are retained and the remainder is removed as background, segmenting the object's image.

F(x,y) = 1 (t1 <= F(x,y) <= t2)

F(x,y) = 0 (otherwise)

Step 2: Binarize the segmented object to produce a black-and-white image, and filter this binary image to obtain a smooth black-and-white image;

Step 3: Sharpen the edges with the Canny operator, and apply a dilation algorithm to remove small holes;

Step 4: Use an edge-extraction algorithm to obtain the contour of the object;

Step 5: Perform image recognition based on shape features and find the center point of the object according to its geometric model. A shape parameter is used: the shape parameter F describes, to some extent, the compactness of a region, and is computed from the region's perimeter B and area A:

F = B*B/(4*PI*A)

where the shape parameter attains its minimum value of 1 for a circular region, while F is always greater than 1 for regions of any other shape. For example, when recognizing a spherical object, noise regions whose area is too small are first filtered out with an area threshold; the region whose F is closest to 1 is then taken, which distinguishes regular circles in the image from other shapes. For other regular shapes, such as a square, the square's geometry gives an F value close to 4/PI (about 1.27).
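The center point of the retained region can be found as the centroid of the binary mask. A minimal sketch using zeroth- and first-order image moments follows; the function name and mask representation are assumptions for illustration:

```python
def mask_centroid(mask):
    """Centroid (u_c, v_c) of a binary mask via image moments:
    m00 counts foreground pixels, m10 and m01 are the first moments."""
    m00 = m10 = m01 = 0
    for v, row in enumerate(mask):
        for u, val in enumerate(row):
            if val:
                m00 += 1
                m10 += u
                m01 += v
    return (m10 / m00, m01 / m00)

# Two foreground pixels at (u=1, v=0) and (u=1, v=1)
uc, vc = mask_centroid([[0, 1], [0, 1]])
# (uc, vc) == (1.0, 0.5)
```

This centroid is the quantity c(u_c, v_c) that Step 6 compares against the given image point to form the feedback error.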

Step 6: After the center point of the object is determined, control the robot to move the camera so that the image of the object always stays within the camera's field of view, tracking the moving object.

An example of the present invention is given below. In this example, the camera is mounted on the end of a five-degree-of-freedom robot composed of a three-degree-of-freedom Cartesian robot and a two-degree-of-freedom rotating wrist, the wrist being mounted at the end of the Cartesian robot's vertical axis; the robot is controlled by a master control computer and a controller. A standard industrial color camera is fixed on the rotating wrist, an OK-series PCI-bus image acquisition card is selected, and the card is installed in a general-purpose PC with a 2.8 GHz clock frequency to form the image processing system. The working principle of the whole device is shown in Figure 1.

In the application example, the system tracks a remote-controlled toy car under natural lighting. The car's surface is yellow and green, and its front and rear windows are black. Using the learning method described in the present invention, learning is performed before motion tracking to obtain the H and S thresholds of the car's surface color. The car is driven with a remote control, and following the flow shown in Figure 2, the image recognition method of Steps 1 to 6 achieves motion tracking of the remote-controlled car.

It can be seen that the method and device of the present invention achieve fast visual tracking of objects with complex surface colors, without camera calibration and without pasting color marks.

Claims (6)

1. A fast tracking method based on object surface color, characterized in that image recognition is performed during the motion of the object according to the following steps:

Step 1: First separate the object to be recognized and tracked from the background, then acquire images in real time. Each image is compared against the computed HSV thresholds; pixel regions that fall within the color range are retained and the remainder is removed as background, segmenting the object image. This step may segment several objects of similar color:

F(x,y) = 1 (t1 <= F(x,y) <= t2)

F(x,y) = 0 (otherwise);

Step 2: Binarize the segmented object image to produce a black-and-white image, and filter this binary image to obtain a smooth black-and-white image;

Step 3: Sharpen the edges with the Canny operator, and apply a dilation algorithm to remove small holes;

Step 4: Use an edge-extraction algorithm to obtain the contour of the object;

Step 5: Perform image recognition based on shape features: according to the geometric model of the tracked object, discard pixel regions that do not fit the model and locate the centroid of the tracked object's image;

Step 6: After the centroid of the object image is determined, use the difference between the centroid position and a given image point as the feedback control quantity, and control the robot to move the camera so that the image of the object always stays within the camera's field of view, thereby tracking the moving object.

2. The fast tracking method of claim 1, characterized in that the fifth step, image recognition based on shape features, uses a shape parameter F that to some extent describes the compactness of a region and is computed from the region's perimeter B and area A:

F = B*B/(4*PI*A)

where the shape parameter attains its minimum value of 1 for a circular region, while F is always greater than 1 for regions of any other shape.

3. The fast tracking method of claim 1, characterized in that it further comprises a learning stage before motion tracking, using an online learning method: a) before the tracking task, a digitized RGB color image is obtained through the image acquisition card; b) the user selects a rectangular region of the object to be tracked with the mouse; c) the operating system stores the selected local image on the computer as a BMP file, to serve as the thresholds needed for later recognition and the basis for segmenting images acquired in real time; d) the local color image is converted to the HSV model, and histograms of its H and S components are computed to obtain the H and S thresholds of the selected region.

4. The fast tracking method of claim 3, characterized in that, in the subsequent real-time image recognition, the H and S thresholds serve as the criterion for object segmentation and do not change until the user performs learning again.

5. The fast tracking method of claim 1, characterized in that, without calibrating the camera, the error between a given point in the image and the centroid of the object is used as the control feedback quantity to achieve fast and accurate visual tracking.

6. The fast tracking method of claim 1, characterized in that the object is recognized and visually tracked based on the color information of its surface.
CNB2004100688713A 2004-07-13 2004-07-13 A Fast Tracking Method Based on Object Surface Color Expired - Fee Related CN100393486C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100688713A CN100393486C (en) 2004-07-13 2004-07-13 A Fast Tracking Method Based on Object Surface Color

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100688713A CN100393486C (en) 2004-07-13 2004-07-13 A Fast Tracking Method Based on Object Surface Color

Publications (2)

Publication Number Publication Date
CN1721144A CN1721144A (en) 2006-01-18
CN100393486C true CN100393486C (en) 2008-06-11

Family

ID=35911929

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100688713A Expired - Fee Related CN100393486C (en) 2004-07-13 2004-07-13 A Fast Tracking Method Based on Object Surface Color

Country Status (1)

Country Link
CN (1) CN100393486C (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030244B (en) * 2006-03-03 2010-08-18 中国科学院自动化研究所 Automatic identity discriminating method based on human-body physiological image sequencing estimating characteristic
CN101453660B (en) * 2007-12-07 2011-06-08 华为技术有限公司 Video object tracking method and apparatus
CN101685309B (en) * 2008-09-24 2011-06-08 中国科学院自动化研究所 Method for controlling multi-robot coordinated formation
CN101587591B (en) * 2009-05-27 2010-12-08 北京航空航天大学 Vision Accurate Tracking Method Based on Two-parameter Threshold Segmentation
CN101783964A (en) * 2010-03-18 2010-07-21 上海乐毅信息科技有限公司 Auxiliary driving system for achromate or tritanope based on image identification technology
CN101913147B (en) * 2010-07-12 2011-08-17 中国科学院长春光学精密机械与物理研究所 High-precision fully-automatic large transfer system
CN101964114B (en) * 2010-09-16 2013-02-27 浙江吉利汽车研究院有限公司 Auxiliary traffic light recognition system for anerythrochloropsia drivers
CN102096927A (en) * 2011-01-26 2011-06-15 北京林业大学 Target tracking method of independent forestry robot
CN102431034B (en) * 2011-09-05 2013-11-20 天津理工大学 Color recognition-based robot tracking method
CN102917171B (en) * 2012-10-22 2015-11-18 中国南方电网有限责任公司超高压输电公司广州局 Based on the small target auto-orientation method of pixel
CN103056864A (en) * 2013-01-24 2013-04-24 上海理工大学 Device and method for detecting position and angle of wheeled motion robot in real time
CN103177259B (en) * 2013-04-11 2016-05-18 中国科学院深圳先进技术研究院 Color lump recognition methods
CN104281832A (en) * 2013-07-04 2015-01-14 上海高威科电气技术有限公司 Visual identity industrial robot
CN103895023B (en) * 2014-04-04 2015-08-19 中国民航大学 A kind of tracking measurement method of the mechanical arm tail end tracing measurement system based on coding azimuth device
KR102591960B1 (en) * 2014-12-05 2023-10-19 에이알에스 에스.알.엘. Device for orienting parts, particularly for gripping by robots, automation means and the like
CN106934813A (en) * 2015-12-31 2017-07-07 沈阳高精数控智能技术股份有限公司 A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of method that image procossing follows the trail of the robot of object and follows the trail of object
CN106096599B (en) * 2016-04-28 2019-03-26 浙江工业大学 A kind of inside truck positioning method based on painting color lump
CN107403437A (en) * 2016-05-19 2017-11-28 上海慧流云计算科技有限公司 The method, apparatus and robot of robotic tracking's object
CN106863332B (en) * 2017-04-27 2023-07-25 广东工业大学 Robot vision positioning method and system
CN107564037A (en) * 2017-08-07 2018-01-09 华南理工大学 A kind of multirobot detection and tracking based on local feature
CN108032313B (en) * 2018-01-04 2019-05-03 北京理工大学 Robotic hand that automatically completes touch-screen games on smart terminals based on the principle of bionics
CN110666801A (en) * 2018-11-07 2020-01-10 宁波赛朗科技有限公司 Grabbing industrial robot for matching and positioning complex workpieces
CN113103256A (en) * 2021-04-22 2021-07-13 达斯琪(重庆)数字科技有限公司 Service robot vision system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2715931Y (en) * 2004-07-13 2005-08-10 中国科学院自动化研究所 Apparatus for quick tracing based on object surface color

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2715931Y (en) * 2004-07-13 2005-08-10 中国科学院自动化研究所 Apparatus for quick tracing based on object surface color

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Design of a micro-dimension visual precision inspection system. Liao Qiang, Mi Lin, Zhou Yi, Xu Zongjun. Journal of Chongqing University (Natural Science Edition), Vol. 25, No. 12, 2002. *
A survey of camera calibration methods in machine vision. Wu Wenqi, Sun Zengqi. Application Research of Computers, No. 2, 2004. *
Research on color mark design and recognition algorithms. Hu Ying, Zhao Shuying, Xu Xinhe. Journal of Image and Graphics, Vol. 7, No. 12, 2002. *

Also Published As

Publication number Publication date
CN1721144A (en) 2006-01-18

Similar Documents

Publication Publication Date Title
CN100393486C (en) A Fast Tracking Method Based on Object Surface Color
CN105023278B (en) A kind of motion target tracking method and system based on optical flow method
JP6305171B2 (en) How to detect objects in a scene
CN104156693B (en) A kind of action identification method based on the fusion of multi-modal sequence
CN105468033B (en) A kind of medical arm automatic obstacle-avoiding control method based on multi-cam machine vision
CN101587591B (en) Vision Accurate Tracking Method Based on Two-parameter Threshold Segmentation
CN102024146B (en) Method for extracting foreground in piggery monitoring video
CN107389701A (en) A kind of PCB visual defects automatic checkout system and method based on image
CN102509085A (en) Pig walking posture identification system and method based on outline invariant moment features
CN111027432B (en) A Vision-Following Robot Method Based on Gait Features
CN109961016B (en) Multi-gesture accurate segmentation method for smart home scene
CN103020632A (en) Fast recognition method for positioning mark point of mobile robot in indoor environment
CN115816460A (en) A Manipulator Grasping Method Based on Deep Learning Target Detection and Image Segmentation
CN105700528A (en) Autonomous navigation and obstacle avoidance system and method for robot
CN104715250B (en) cross laser detection method and device
Bormann et al. Autonomous dirt detection for cleaning in office environments
CN2715931Y (en) Apparatus for quick tracing based on object surface color
Zhang et al. A coarse-to-fine leaf detection approach based on leaf skeleton identification and joint segmentation
CN101477618A (en) Process for pedestrian step gesture periodic automatic extraction from video
CN113689365B (en) A target tracking and positioning method based on Azure Kinect
Getahun et al. A robust lane marking extraction algorithm for self-driving vehicles
CN115100615A (en) An end-to-end lane line detection method based on deep learning
CN104637062A (en) Target tracking method based on particle filter integrating color and SURF (speeded up robust feature)
CN118559940A (en) Adaptive adjustment system and method based on automatic burr removal equipment
CN117226289A (en) Laser cutting system based on image recognition clout detects

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080611

Termination date: 20190713