WO2016034104A1 - 自移动表面行走机器人及其图像处理方法 - Google Patents


Info

Publication number
WO2016034104A1
WO2016034104A1 (PCT/CN2015/088757)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel points
pixel
edge
floor
Prior art date
Application number
PCT/CN2015/088757
Other languages
English (en)
French (fr)
Inventor
汤进举
Original Assignee
科沃斯机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 科沃斯机器人有限公司 filed Critical 科沃斯机器人有限公司
Publication of WO2016034104A1 publication Critical patent/WO2016034104A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • the present invention relates to an intelligent robot, and more particularly to a self-moving surface walking robot and a method of image processing in a navigation process.
  • Intelligent cleaning robots include mopping robots, vacuum robots, etc., which combine mobile robot and vacuum cleaner technology and are among the most challenging hot research topics in the field of household appliances. Since 2000, commercial cleaning robots have come to market one after another, becoming a new type of high-tech product in the service robot field with considerable market prospects.
  • Such an intelligent robot is generally applied to an indoor environment, and a camera is mounted on the body of the robot.
  • the monocular camera visual navigation technology mainly includes image segmentation, obstacle recognition, sensing of the surrounding environment and planning a walking route: the ground is photographed, and the captured image is then processed for obstacle detection and path planning.
  • this method has the following drawback: if the edge lines of an indoor tiled floor are too prominent, the robot may mistake a floor edge line for part of an obstacle, so that obstacle detection and recognition are severely disrupted during image processing, reducing the robot's working efficiency or even preventing the robot from working.
  • the technical problem to be solved by the present invention is to provide, in view of the deficiencies of the prior art, a self-moving surface walking robot and an image processing method thereof that improve the accuracy and reliability of obstacle identification while the robot works.
  • An image processing method applied to a self-moving surface walking robot includes the following steps:
  • S2 Perform edge binarization on the environment image to obtain a binary image containing edge pixels and background pixels;
  • the method for scanning the binary image described in S3 is to scan row by row and then column by column, or column by column and then row by row;
  • S3 specifically includes:
  • S4 specifically includes:
  • the method of eliminating the floor edge pixel points A and B of S4 in S5 is to set the pixel values of A and B to 0 in the binary image.
  • the Canny edge detection operator method, the Roberts gradient method, the Sobel edge detection operator method or the Laplacian algorithm are used to calculate the environment image to obtain a binary image.
  • step S1' is also included before S2: the collected environment image is subjected to denoising processing.
  • the image is denoised by Gaussian filtering, median filtering or mean filtering in S1'.
  • the invention also provides a self-moving surface walking robot, the robot comprising: an image collecting unit, a walking unit, a drive unit, a functional component and a control unit;
  • the control unit is respectively connected to the functional component, the image acquisition unit and the driving unit; the driving unit is connected to the walking unit and receives instructions from the control unit to drive the walking unit to walk,
  • the functional component receives instructions from the control unit to work on the surface according to a predetermined working mode, and the control unit processes the images collected by the image capturing unit;
  • the self-moving surface walking robot adopts the image processing method described above.
  • the functional component is a cleaning component, a waxing component, a security alarm component, an air purification component and/or a polishing component.
  • the self-moving surface walking robot and the image processing method provided by the invention can effectively remove floor edge lines during image preprocessing, leaving only the floor background and some obstacles at the same gray level, which helps improve the accuracy and reliability of obstacle identification.
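As a concrete illustration of the S1–S6 pipeline summarized above, the following is a minimal Python sketch. The gradient-based binarization, the row-scan order, and all default parameter values (m, P, K, v, n) are illustrative assumptions, not the patent's reference implementation.

```python
# Minimal sketch of steps S1-S6, treating an image as a list of rows of gray
# values (0-255). Parameter defaults (m, P, K, v, n) are assumed values.

EDGE, BG = 255, 0  # gray values the text assigns to edge and background pixels

def binarize_edges(gray, thresh=30):
    """S2 (simplified): mark a pixel as an edge pixel (255) when the
    horizontal gray-value jump to its left neighbour exceeds thresh."""
    h, w = len(gray), len(gray[0])
    out = [[BG] * w for _ in range(h)]
    for r in range(h):
        for c in range(1, w):
            if abs(gray[r][c] - gray[r][c - 1]) > thresh:
                out[r][c] = EDGE
    return out

def remove_floor_edges(binary, gray, m=50, P=5, K=200, v=30, n=10):
    """S3-S6 (row scan only): for each edge pair (A, B) at most m pixels
    apart, look P pixels outside the pair (C0, D0) in the gray image; if
    both look like floor (within (K-v, K+v)) and differ by at most n,
    erase A and B from the binary image."""
    h, w = len(binary), len(binary[0])
    for r in range(h):
        for a in range(w):
            if binary[r][a] != EDGE:
                continue
            for b in range(a + 1, min(a + m + 1, w)):
                if binary[r][b] != EDGE:
                    continue
                c0, d0 = a - P, b + P
                if 0 <= c0 and d0 < w:
                    g0, g1 = gray[r][c0], gray[r][d0]
                    floor_like = K - v < g0 < K + v and K - v < g1 < K + v
                    if floor_like and abs(g0 - g1) <= n:
                        binary[r][a] = binary[r][b] = BG  # S5: erase the pair
                break  # pair handled (accepted or rejected), as in S3.2
    return binary
```

On a one-row image containing a dark seam between two light tiles, the two edge pixels flanking the seam are detected by `binarize_edges` and then removed by `remove_floor_edges`, while edges whose surroundings do not look like floor would be kept.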
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 3 is an image captured by the robot of the present invention after noise removal;
  • FIG. 4 is the binary image obtained by binarizing FIG. 3;
  • FIG. 5 is the image of FIG. 3 after the floor edge lines have been removed;
  • FIG. 6 is a structural block diagram of a self-moving surface walking robot of the present invention.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in FIG. 1 and in conjunction with FIG. 4-5, the image processing method includes the following steps:
  • the robot collects an environment image by using an image capturing unit (such as a camera), and the environment image is a grayscale image, wherein the image includes an image of an object such as a door, a box, a floor, or the like;
  • the Canny edge detection operator method, the Roberts gradient method, the Sobel edge detection operator method or the Laplacian algorithm can be used to compute the environment image; the binary image is a grayscale image including only two gray values, in which the edge lines of objects in the originally acquired image (such as the edge lines of a door, a box or the floor) appear as a single gray value.
  • the edge lines are embodied as the same gray value in the binary image, and this gray value can be set freely, as long as it can be distinguished from the background gray value.
  • the edge pixel gray value is set to 255, and the background pixel gray value is set to 0;
  • S3 Scan the binary image to obtain two adjacent edge pixel points A and B with a spacing not greater than a maximum pixel width threshold of the edge pixel.
  • the step specifically includes:
  • the pixel width threshold m in S3.2 is set to 50. It should be noted that m is chosen because one end of a floor edge line may be far from the camera while the other end is near it; a floor edge line whose real width is essentially constant may therefore appear with varying width in the picture taken by the camera, gradually narrowing as its distance from the camera increases.
  • 50 is the maximum width the entire floor edge line reaches in the picture taken by the camera.
  • 50 is set according to the actual width of the floor edge lines in a given environment; in a different environment, the user can set the value of parameter m accordingly.
  • after two points satisfying the gap pixel width requirement have been found, it is determined whether the two points are located on two adjacent floor tiles, that is, the method proceeds to step S4;
  • S4 determining whether the pixel points A and B are edge pixel points of two adjacent floor tiles; if yes, proceeding to S5, and if not, returning to S3; the step specifically includes:
  • the pixel points C0 and D0 are respectively obtained by extending outward from A0 and B0 by P pixel widths.
  • the value range of the P can be greater than or equal to 3 and less than or equal to 6.
  • P is set to 5; if S3 scans by rows, C0 is obtained by moving 5 pixels to the left/right from pixel point A0, and D0 is obtained by moving 5 pixels to the right/left from pixel point B0;
  • S5 eliminating the floor edge pixel points A and B found in S3 (i.e., in the binary image); the specific elimination method is: setting the gray values of pixel points A and B to 0;
  • the method for scanning the binary image in S3 is to scan row by row and then column by column, or column by column and then row by row, to avoid missed pixels and completely eliminate the horizontal and vertical floor edge lines in the image.
  • a pixel point A with a gray value of 255 (corresponding to white) is found by row scanning; pixel point A is then taken as the starting point i, and the positions up to i+50 are searched for a pixel with a gray value of 255. If a matching pixel point B exists, it is set as the ending point j (if no matching pixel point is found, pixel point A is discarded and the next pixel with a gray value of 255 is sought).
  • the pixel points A and B are pixel points on the edge lines of two adjacent floor tiles.
  • the gray values of pixel points A and B are set to 0 (corresponding to black) in the binary image (FIG. 4), that is, the floor edge pixel points A and B are eliminated.
  • the image processing method described in this application may misjudge a slender obstacle on the floor as a floor edge line, but this does not affect the practical application of the method, because a slender obstacle can only be misjudged as a floor edge line when it meets the following two conditions: a. the width of the slender obstacle is smaller than that of a floor edge line; b. the height of the slender obstacle is substantially 0, i.e., it is flat, since otherwise the vacuum cleaner can still detect the obstacle from its edges in the vertical direction. Slender obstacles satisfying both conditions are rare in practice, and even if such an obstacle does appear, it does not affect the work of the vacuum cleaner.
  • the vacuum cleaner eliminates floor edge lines with the above method precisely to avoid mistaking a floor edge line for an obstacle and changing its walking route to prevent collisions with objects or walls; therefore, after misjudging such a slender obstacle as a floor edge line and eliminating it, the vacuum cleaner will drive straight over the slender obstacle, which does not cause a collision and instead allows the slender obstacle to be swept up.
  • This embodiment is basically the same as the first embodiment, except that: before S2, the method further includes:
  • S1' Denoise the environment image collected in S1 (as shown in Fig. 2).
  • the image may be subjected to noise removal by Gaussian filtering, median filtering or mean filtering.
  • these methods are common technical means and are not described again; it should be noted that the denoising step can be added or omitted according to actual needs, for example when a camera with a higher resolution is used to capture the environment image (i.e., the image collected by the camera itself is effectively already denoised).
  • FIG. 6 is a structural block diagram of a self-moving surface walking robot according to the present invention.
  • the present invention provides a self-moving surface walking robot, the robot comprising: an image collecting unit 1, a walking unit 2, a driving unit 3, a functional component 4 and a control unit 5;
  • the control unit 5 is respectively connected to the functional component 4, the image acquisition unit 1 and the drive unit 3; the drive unit 3 is connected to the travel unit 2 and, on receiving an instruction from the control unit 5, drives the walking unit 2 to walk; the functional component 4 receives instructions from the control unit 5 to work on the surface according to a predetermined walking mode.
  • the functional component 4 is a cleaning component, a waxing component, a security alarm component, an air purification component and/or a polishing component; the control unit 5 processes the images acquired by the image acquisition unit 1; the self-moving surface walking robot adopts the image processing method of the two embodiments above. Once the floor seam lines in the captured image have been eliminated, the robot can move over the floor more easily and will not mistake floor seams for obstacles and perform avoidance maneuvers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A self-moving surface walking robot and an image processing method thereof, the method comprising: S1: the robot collects an environment image; S2: edge binarization is performed on the environment image to obtain a binary image containing edge pixels and background pixels; S3: the binary image is scanned to find two adjacent edge pixels A and B whose spacing does not exceed a maximum edge-pixel width threshold; S4: it is determined whether pixels A and B are edge pixels of two adjacent floor tiles; if so, proceed to S5, and if not, return to S3; S5: the floor edge pixels A and B found in S4 are eliminated; S6: steps S3, S4 and S5 are repeated until all floor edge pixels in the binary image have been eliminated. The method effectively removes floor edge lines and helps improve the accuracy and reliability of obstacle recognition.

Description

Self-moving surface walking robot and image processing method thereof
Technical Field
The present invention relates to an intelligent robot, and in particular to a self-moving surface walking robot and a method of image processing during its navigation.
Background Art
Intelligent cleaning robots, including mopping robots and vacuuming robots, combine mobile robot and vacuum cleaner technology and are currently among the most challenging and active research topics in the household appliance field. Since 2000, commercial cleaning robots have come to market one after another, becoming a new type of high-tech product in the service robot field with considerable market prospects.
Such intelligent robots are generally used in indoor environments, with a camera mounted on the robot body. This monocular-camera visual navigation technology mainly involves image segmentation, obstacle recognition, sensing of the surrounding environment and planning of a walking route: the robot photographs the ground and then processes the captured images for obstacle detection and path planning. This approach has the following drawback: if the edge lines of indoor floor tiles are too prominent, the robot may mistake a floor edge line for part of an obstacle, which severely disrupts obstacle detection and recognition during image processing and reduces the robot's working efficiency or even prevents it from working.
In view of the above problem, it is desirable to provide a method, applied to a self-moving surface walking robot, that removes such floor edge lines during image preprocessing, leaving only the floor background and the obstacle portions at the same gray level, together with a self-moving surface walking robot implementing this function, so as to improve the accuracy and reliability of obstacle recognition while the robot works.
Summary of the Invention
The technical problem to be solved by the present invention is to provide, in view of the deficiencies of the prior art, a self-moving surface walking robot and an image processing method thereof that help improve the accuracy and reliability of obstacle recognition while the robot works.
The technical problem to be solved by the present invention is achieved by the following technical solution:
An image processing method applied to a self-moving surface walking robot comprises the following steps:
S1: the robot collects an environment image;
S2: edge binarization is performed on the environment image to obtain a binary image containing edge pixels and background pixels;
S3: the binary image is scanned to find two adjacent edge pixels A and B whose spacing does not exceed a preset maximum edge-pixel width threshold;
S4: it is determined whether pixels A and B are edge pixels of two adjacent floor tiles; if so, proceed to S5; if not, return to S3;
S5: the floor edge pixels A and B found in S4 are eliminated;
S6: steps S3, S4 and S5 are repeated until all floor edge pixels in the binary image have been eliminated.
To avoid missed pixels, the method of scanning the binary image in S3 is to scan row by row and then column by column, or column by column and then row by row;
To locate edge pixels A and B accurately, S3 specifically comprises:
S3.1: scan the binary image to find a pixel A with a gray value of 255;
S3.2: taking pixel A as the starting point, determine whether there is a pixel B with a gray value of 255 within m pixel widths outward from that starting point, where m is the preset maximum edge-pixel width threshold; if so, proceed to step S4; otherwise return to S3.1;
To determine accurately whether pixels A and B are edge pixels of two adjacent floor tiles, S4 specifically comprises:
S4.1: based on pixels A and B from S3, find the corresponding pixels A0 and B0 in the environment image;
S4.2: after locating pixels A0 and B0, extend each outward by P pixel widths to obtain pixels C0 and D0;
S4.3: determine whether the gray values of pixels C0 and D0 lie within the range (K-v, K+v), where K is the average floor gray value and v is a preset color-difference range; if so, proceed to step S4.4; otherwise return to S3;
S4.4: determine whether the gray-value difference between pixels C0 and D0 is <= n, where n is the maximum gray-value difference between two adjacent floor tiles in the denoised image; if so, pixels A and B are judged to be edge pixels of two adjacent floor tiles and the method proceeds to step S5; otherwise return to S3;
The method of eliminating the floor edge pixels A and B of S4 in S5 is: setting the pixel values of A and B to 0 in the binary image.
Preferably, in S2 the Canny edge detection operator method, the Roberts gradient method, the Sobel edge detection operator method or the Laplacian algorithm is applied to the environment image to obtain the binary image.
To achieve a better image processing result, step S1' is further included before S2: denoising the collected environment image.
Preferably, in S1' the image is denoised by Gaussian filtering, median filtering or mean filtering.
The present invention also provides a self-moving surface walking robot, the robot comprising: an image acquisition unit, a walking unit, a drive unit, a functional component and a control unit;
The control unit is connected to the functional component, the image acquisition unit and the drive unit respectively; the drive unit is connected to the walking unit and, on receiving an instruction from the control unit, drives the walking unit to walk; the functional component receives instructions from the control unit to work on the surface according to a predetermined working mode; and the control unit processes the images collected by the image acquisition unit;
The self-moving surface walking robot adopts the image processing method described above.
Preferably, the functional component is a cleaning component, a waxing component, a security alarm component, an air purification component and/or a polishing component.
With the self-moving surface walking robot and the image processing method provided by the present invention, floor edge lines can be effectively removed during image preprocessing, leaving only the floor background and some obstacles at the same gray level, which helps improve the accuracy and reliability of obstacle recognition.
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Brief Description of the Drawings
Fig. 1 is a flowchart of the image processing method of Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the image processing method of Embodiment 2 of the present invention;
Fig. 3 is an image captured by the robot of the present invention after noise removal;
Fig. 4 is the binary image obtained by binarizing Fig. 3;
Fig. 5 is the image of Fig. 3 after the floor edge lines have been removed;
Fig. 6 is a structural block diagram of the self-moving surface walking robot of the present invention.
Detailed Description of the Embodiments
Embodiment 1
Fig. 1 is a flowchart of the image processing method of Embodiment 1 of the present invention. As shown in Fig. 1, and with reference to Figs. 4 and 5, the image processing method comprises the following steps:
S1: the robot collects an environment image through an image acquisition unit (such as a camera); the environment image is a grayscale image containing the images of objects such as a door, a box and the floor;
S2: edge binarization is performed on the environment image to obtain a binary image containing edge pixels and background pixels (as shown in Fig. 4). In this step, the Canny edge detection operator method, the Roberts gradient method, the Sobel edge detection operator method or the Laplacian algorithm may be applied to the environment image. The binary image is a grayscale image containing only two gray values: the edge lines of objects in the originally collected image (such as the edge lines of the door, the box and the floor) appear as a single gray value in the binary image, and this gray value may be chosen freely as long as it can be distinguished from the background gray value; for example, in this embodiment the edge-pixel gray value is set to 255 and the background-pixel gray value is set to 0;
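Of the four edge detectors named for S2, the Sobel operator can be sketched in pure Python as follows; the gradient threshold of 100 is an assumed value, not one given in the text, and border pixels are simply left as background.

```python
# 3x3 Sobel edge binarization for S2; images are lists of rows of gray values.
# The threshold (100) is an assumed value; borders are left as background.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_binarize(gray, thresh=100):
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(SOBEL_X[i][j] * gray[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * gray[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            # Edge pixels get gray value 255, background pixels 0, as in the text.
            out[r][c] = 255 if abs(gx) + abs(gy) > thresh else 0
    return out
```

On a vertical step between a light and a dark tile, the pixels on both sides of the step are marked 255 while uniform regions stay 0.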
S3: scan the binary image to find two adjacent edge pixels A and B whose spacing does not exceed the maximum edge-pixel width threshold. This step specifically comprises:
S3.1: scan the binary image to find a pixel A with a gray value of 255;
S3.2: taking pixel A as the starting point, determine whether there is a pixel B with a gray value of 255 within m pixel widths outward from that starting point, where m is the preset maximum edge-pixel width threshold; if so, proceed to the next step; otherwise return to S3.1. In this embodiment the pixel-width threshold m in S3.2 is set to 50. It should be noted that m is chosen because one end of a floor edge line may be far from the camera while the other end is near it, so a floor edge line whose real width is essentially constant may appear with varying width in the picture taken by the camera, gradually narrowing as its distance from the camera increases. Here 50 is the largest width the whole floor edge line reaches in the picture taken by the camera; of course, 50 is set according to the actual width of the floor edge lines in one particular environment, and in a different environment the user may set the value of parameter m accordingly.
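The S3 search on one scan line (find an edge pixel A, then look at most m pixels ahead for a matching B) can be sketched as follows; the function name and generator form are illustrative choices, with m defaulting to the embodiment's threshold of 50.

```python
# One scan line of S3: an edge pixel A (gray value 255) is paired with the
# first edge pixel B found within m pixels ahead of it; m defaults to the
# embodiment's threshold of 50.

def find_edge_pairs(row, m=50):
    w = len(row)
    for a in range(w):
        if row[a] != 255:
            continue
        for b in range(a + 1, min(a + m + 1, w)):
            if row[b] == 255:
                yield a, b   # candidate pair (A, B) handed to the S4 check
                break
        # if no B is found within m pixels, A is simply discarded (S3.2)
```

For example, on a line with edge pixels at positions 5, 9 and 70, only (5, 9) qualifies, since the pixel at 70 is more than 50 pixels from its nearest neighbour.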
After two points satisfying the gap-width requirement have been found, it must next be determined whether the two points lie on two adjacent floor tiles, i.e. the method proceeds to step S4;
S4: determine whether pixels A and B are edge pixels of two adjacent floor tiles; if so, proceed to S5; if not, return to S3. This step specifically comprises:
S4.1: based on pixels A and B from S3, find the corresponding pixels A0 and B0 in the environment image collected in S1;
S4.2: after locating pixels A0 and B0, extend each outward by P pixel widths to obtain pixels C0 and D0. P may take any value from 3 to 6 inclusive; in this embodiment P is set to 5. If S3 scans by rows, C0 is obtained by moving 5 pixels to the left/right from pixel A0, and D0 by moving 5 pixels to the right/left from pixel B0;
If S3 scans by columns, C0 is obtained by moving 5 pixels up/down from pixel A0, and D0 by moving 5 pixels down/up from pixel B0;
S4.3: determine whether the gray values of pixels C0 and D0 lie within the range (K-v, K+v), where K is the average floor gray value and v is a preset color-difference range; if so, proceed to step S4.4; otherwise return to S3.1;
S4.4: determine whether the gray-value difference between pixels C0 and D0 is <= n, where n is the maximum gray-value difference between two adjacent floor tiles in the environment image; if so, pixels A and B are judged to be edge pixels of two adjacent floor tiles and the method proceeds to step S5; otherwise return to S3.1. In this embodiment, the maximum gray-value difference n between two adjacent floor tiles in the environment image collected in S1 is set to 10; n is preset to allow for the slight color difference that may exist between two adjacent floor tiles;
S5: eliminate the floor edge pixels A and B found in S3 (i.e. in the binary image); the specific elimination method is: set the gray values of pixels A and B to 0;
S6: repeat steps S3, S4 and S5 until all floor edge pixels in the binary image have been eliminated (yielding the image shown in Fig. 5).
It should be noted that the method of scanning the binary image in S3 is to scan row by row and then column by column, or column by column and then row by row, so that no pixels are missed and both horizontal and vertical floor edge lines in the image are completely eliminated. In this embodiment, when row scanning of the binary image of Fig. 4 finds a pixel A with a gray value of 255 (corresponding to white), pixel A is taken as starting point i and the positions up to i+50 are searched for another pixel with a gray value of 255; if a matching pixel B is found, it is set as end point j (if no matching pixel is found, pixel A is discarded and the next pixel with a gray value of 255 is sought). The pixels A0 and B0 corresponding to A and B are then located in the collected environment image, and the difference between the gray values of the pixels 5 pixel widths outside A0 and B0 (i.e. pixel C0 located 5 pixels to the left of A0 and pixel D0 located 5 pixels to the right of B0) is computed from the collected environment image; those skilled in the art may of course set the outward extension width as needed.
To determine whether both sides of pixels A and B (or of pixels A0 and B0) are floor, two judgments are required:
1) Determine whether the gray values of pixels C0 and D0 fall between K-v and K+v. If so, C0 and D0 are regarded as floor pixels; otherwise they are not. Here K denotes the floor gray value in the collected environment image, and is best determined by taking several samples of floor pixels in the collected image and averaging their gray values; v denotes a preset color-difference range, which may be set as desired;
2) Determine whether the difference between pixels C0 and D0 satisfies <= n. If so, pixels C0 and D0 are regarded as lying on two adjacent floor tiles (if the difference does not satisfy <= n, the two sides of pixels A and B are not both floor, so pixel A is discarded and the next pixel with a gray value of 255 is sought), thereby confirming that pixels A and B lie on a floor edge line.
Only when both conditions are satisfied can pixels A and B be confirmed as pixels on the edge line between two adjacent floor tiles. Once this is confirmed, the gray values of pixels A and B are set to 0 (corresponding to black) in the binary image, i.e. the floor edge pixels A and B are eliminated.
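The two judgments above can be combined into a single check on the gray values of C0 and D0. In this sketch K (the floor mean) and v (the color-difference range) are assumed values, while n = 10 follows the embodiment.

```python
# Combined S4.3/S4.4 check. K (floor mean) and v (color-difference range)
# are assumed values here; n = 10 follows the embodiment.

def is_floor_pair(g_c0, g_d0, K=200, v=30, n=10):
    """True when C0 and D0 both lie in (K-v, K+v) and differ by at most n,
    i.e. both sides of the candidate pair look like adjacent floor tiles."""
    in_range = K - v < g_c0 < K + v and K - v < g_d0 < K + v
    return in_range and abs(g_c0 - g_d0) <= n
```

The check rejects a pair either when one side is clearly not floor (condition 1) or when the two sides differ by more than the inter-tile tolerance n (condition 2).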
Following the scan order of rows first and then columns, the floor-edge pixel pairs A and B on each row are eliminated step by step; when the subsequent column scan completes, the floor edge lines in all columns containing the floor edge will have been entirely eliminated, and the image obtained after the floor seam lines have been removed is shown in Fig. 5. The column-scan elimination procedure is identical to the row-scan procedure and is not repeated here.
It should also be noted that the image processing method described in this application may misjudge an elongated obstacle on the floor as a floor edge line, but this does not affect the practical application of the method, because an elongated obstacle can be misjudged as a floor edge line only when it satisfies both of the following conditions: a. the obstacle is narrower than a floor edge line; b. the obstacle's height is substantially 0, i.e. it is flat, since otherwise the vacuum cleaner can still detect it from its edges in the vertical direction. Elongated obstacles satisfying both conditions are rare in practice, and even if such an obstacle does occur, it will not affect the vacuum cleaner's work: the floor edge lines are eliminated with the above method precisely so that the cleaner does not mistake them for obstacles and change its walking route to avoid collisions with objects or walls. Hence, after misjudging such an elongated obstacle as a floor edge line and eliminating it, the cleaner simply drives straight over the obstacle; this causes no collision and instead allows the elongated obstacle to be swept up.
Embodiment 2
This embodiment is basically the same as Embodiment 1, except that before S2 the method further comprises:
S1': denoise the environment image collected in S1 (as shown in Fig. 2). In this step, Gaussian filtering, median filtering or mean filtering may be used to remove noise from the image; these filtering methods are common technical means and are not described further. Note that this denoising step may be added or omitted according to actual needs, for example it may be omitted when a higher-resolution camera is used to collect the environment image (i.e. the image collected by the camera itself is effectively already denoised).
Fig. 6 is a structural block diagram of the self-moving surface walking robot of the present invention. As shown in Fig. 6, the present invention provides a self-moving surface walking robot comprising: an image acquisition unit 1, a walking unit 2, a drive unit 3, a functional component 4 and a control unit 5;
The control unit 5 is connected to the functional component 4, the image acquisition unit 1 and the drive unit 3 respectively; the drive unit 3 is connected to the walking unit 2 and, on receiving an instruction from the control unit 5, drives the walking unit 2 to walk; the functional component 4 receives instructions from the control unit 5 to work on the surface according to a predetermined walking mode; the functional component 4 is a cleaning component, a waxing component, a security alarm component, an air purification component and/or a polishing component; the control unit 5 processes the images collected by the image acquisition unit 1. The self-moving surface walking robot adopts the image processing method of the two embodiments above. Once the floor seam lines have been eliminated from the collected images, the robot moves over the floor more easily and will not mistake floor seams for obstacles and perform avoidance maneuvers.

Claims (10)

  1. An image processing method applied to a self-moving surface walking robot, characterized by comprising the following steps:
    S1: the robot collects an environment image;
    S2: edge binarization is performed on the environment image to obtain a binary image containing edge pixels and background pixels;
    S3: the binary image is scanned to find two adjacent edge pixels A and B whose spacing does not exceed a preset maximum edge-pixel width threshold;
    S4: it is determined whether pixels A and B are edge pixels of two adjacent floor tiles; if so, proceed to S5; if not, return to S3;
    S5: the floor edge pixels A and B found in S4 are eliminated;
    S6: steps S3, S4 and S5 are repeated until all floor edge pixels in the binary image have been eliminated.
  2. The image processing method according to claim 1, characterized in that the binary image in S3 is scanned row by row and then column by column, or column by column and then row by row.
  3. The image processing method according to claim 1, characterized in that S3 specifically comprises:
    S3.1: scan the binary image to find a pixel A with a gray value of 255;
    S3.2: taking pixel A as the starting point, determine whether there is a pixel B with a gray value of 255 within m pixel widths outward from that starting point, where m is the preset maximum edge-pixel width threshold; if so, proceed to step S4; otherwise return to S3.1.
  4. The image processing method according to claim 1, characterized in that S4 specifically comprises:
    S4.1: based on pixels A and B from S3, find the corresponding pixels A0 and B0 in the environment image;
    S4.2: after locating pixels A0 and B0, extend each outward by P pixel widths to obtain pixels C0 and D0;
    S4.3: determine whether the gray values of pixels C0 and D0 lie within the range (K-v, K+v), where K is the average floor gray value and v is a preset color-difference range; if so, proceed to step S4.4; otherwise return to S3;
    S4.4: determine whether the gray-value difference between pixels C0 and D0 is <= n, where n is the maximum gray-value difference between two adjacent floor tiles in the environment image; if so, pixels A and B are judged to be edge pixels of two adjacent floor tiles and the method proceeds to step S5; otherwise return to S3.
  5. The image processing method according to claim 1, characterized in that the method of eliminating the floor edge pixels A and B of S4 in S5 is: setting the pixel values of A and B to 0 in the binary image.
  6. The image processing method according to claim 1, characterized in that in S2 the Canny edge detection operator method, the Roberts gradient method, the Sobel edge detection operator method or the Laplacian algorithm is applied to the environment image to obtain the binary image.
  7. The image processing method according to claim 1, characterized in that step S1' is further included before S2: denoising the collected environment image.
  8. The image processing method according to claim 7, characterized in that in S1' the environment image is denoised by Gaussian filtering, median filtering or mean filtering.
  9. A self-moving surface walking robot, the robot comprising: an image acquisition unit (1), a walking unit (2), a drive unit (3), a functional component (4) and a control unit (5);
    the control unit (5) is connected to the functional component (4), the image acquisition unit (1) and the drive unit (3) respectively; the drive unit (3) is connected to the walking unit (2) and, on receiving an instruction from the control unit (5), drives the walking unit (2) to walk; the functional component (4) receives instructions from the control unit (5) to work on the surface according to a predetermined walking mode; the control unit (5) processes the images collected by the image acquisition unit (1);
    characterized in that the self-moving floor treatment robot adopts the image processing method according to any one of claims 1 to 8.
  10. The self-moving surface walking robot according to claim 9, characterized in that the functional component (4) is a cleaning component, a waxing component, a security alarm component, an air purification component and/or a polishing component.
PCT/CN2015/088757 2014-09-05 2015-09-01 Self-moving surface walking robot and image processing method thereof WO2016034104A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410452920.7 2014-09-05
CN201410452920.7A CN105467985B (zh) 2014-09-05 Self-moving surface walking robot and image processing method thereof

Publications (1)

Publication Number Publication Date
WO2016034104A1 (zh) 2016-03-10

Family

ID=55439138

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/088757 WO2016034104A1 (zh) 2014-09-05 2015-09-01 Self-moving surface walking robot and image processing method thereof

Country Status (2)

Country Link
CN (1) CN105467985B (zh)
WO (1) WO2016034104A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813103A (zh) * 2020-06-08 2020-10-23 珊口(深圳)智能科技有限公司 移动机器人的控制方法、控制系统及存储介质
CN115407777A (zh) * 2022-08-31 2022-11-29 深圳银星智能集团股份有限公司 分区优化方法及清洁机器人

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106006266B (zh) * 2016-06-28 2019-01-25 西安特种设备检验检测院 一种应用于电梯安全监控的机器视觉建立方法
CN109797691B (zh) * 2019-01-29 2021-10-01 浙江联运知慧科技有限公司 一种无人清扫车及其行车方法
CN111067439B (zh) * 2019-12-31 2022-03-01 深圳飞科机器人有限公司 障碍物处理方法以及清洁机器人
CN113496146A (zh) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 自动工作系统、自动行走设备及其控制方法及计算机可读存储介质
CN113807118B (zh) * 2020-05-29 2024-03-08 苏州科瓴精密机械科技有限公司 机器人沿边工作方法、系统,机器人及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3922126B2 (ja) * 2002-07-30 2007-05-30 松下電器産業株式会社 絨毯目検出装置及びこれを用いた移動ロボット
CN102541063A (zh) * 2012-03-26 2012-07-04 重庆邮电大学 缩微智能车辆寻线控制方法和装置
CN102613944A (zh) * 2012-03-27 2012-08-01 复旦大学 清洁机器人脏物识别系统及清洁方法
US20130338831A1 (en) * 2012-06-18 2013-12-19 Dongki Noh Robot cleaner and controlling method of the same
CN103853154A (zh) * 2012-12-05 2014-06-11 德国福维克控股公司 可行走的清洁设备和运行这种设备的方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739560B (zh) * 2009-12-16 2012-02-01 东南大学 基于边缘和骨架信息的车辆阴影消除方法
CN103150560B (zh) * 2013-03-15 2016-03-30 福州龙吟信息技术有限公司 一种汽车智能安全驾驶的实现方法
CN103679167A (zh) * 2013-12-18 2014-03-26 杨新锋 一种ccd图像处理的方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3922126B2 (ja) * 2002-07-30 2007-05-30 松下電器産業株式会社 絨毯目検出装置及びこれを用いた移動ロボット
CN102541063A (zh) * 2012-03-26 2012-07-04 重庆邮电大学 缩微智能车辆寻线控制方法和装置
CN102613944A (zh) * 2012-03-27 2012-08-01 复旦大学 清洁机器人脏物识别系统及清洁方法
US20130338831A1 (en) * 2012-06-18 2013-12-19 Dongki Noh Robot cleaner and controlling method of the same
CN103853154A (zh) * 2012-12-05 2014-06-11 德国福维克控股公司 可行走的清洁设备和运行这种设备的方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813103A (zh) * 2020-06-08 2020-10-23 珊口(深圳)智能科技有限公司 移动机器人的控制方法、控制系统及存储介质
CN115407777A (zh) * 2022-08-31 2022-11-29 深圳银星智能集团股份有限公司 分区优化方法及清洁机器人

Also Published As

Publication number Publication date
CN105467985A (zh) 2016-04-06
CN105467985B (zh) 2018-07-06

Similar Documents

Publication Publication Date Title
WO2016034104A1 (zh) Self-moving surface walking robot and image processing method thereof
CN107569181B (zh) 一种智能清洁机器人及清扫方法
CN107424144B (zh) 基于激光视觉的焊缝跟踪图像处理方法
WO2021114508A1 (zh) 一种巡线机器人视觉导航巡检和避障方法
CN109460709B (zh) 基于rgb和d信息融合的rtg视觉障碍物检测的方法
CN107462223B (zh) 一种公路转弯前行车视距自动测量装置及测量方法
JP4811201B2 (ja) 走路境界線検出装置、および走路境界線検出方法
CN104916163B (zh) 泊车位检测方法
CN105740782B (zh) 一种基于单目视觉的驾驶员换道过程量化方法
CN109344687B (zh) 基于视觉的障碍物检测方法、装置、移动设备
CN106326822B (zh) 车道线检测的方法及装置
KR101609303B1 (ko) 카메라 캘리브레이션 방법 및 그 장치
JP6690955B2 (ja) 画像処理装置及び水滴除去システム
CN109159137B (zh) 一种可视频评估洗地效果的洗地机器人
CN104268860B (zh) 一种车道线检测方法
WO2014002692A1 (ja) ステレオカメラ
CN112056991A (zh) 机器人的主动清洁方法、装置、机器人和存储介质
CN111242888A (zh) 一种基于机器视觉的图像处理方法及系统
CN112634269A (zh) 一种轨道车辆车体检测方法
JP2020109542A (ja) 付着物検出装置および付着物検出方法
Sebdani et al. A robust and real-time road line extraction algorithm using hough transform in intelligent transportation system application
JP2008160635A (ja) カメラ状態検出方法
KR102504411B1 (ko) 오염물 인식장치
KR101284252B1 (ko) 영상 곡률 공간정보를 이용한 코너검출방법
CN114639003A (zh) 基于人工智能的工地车辆清洁度判断方法、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15837743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15837743

Country of ref document: EP

Kind code of ref document: A1