WO2019104693A1 - Visual sweeping robot and method for establishing a scene map - Google Patents

Visual sweeping robot and method for establishing a scene map

Info

Publication number
WO2019104693A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
scene map
stationary object
picture
map
Prior art date
Application number
PCT/CN2017/114077
Other languages
English (en)
French (fr)
Inventor
王声平
张立新
周毕兴
Original Assignee
深圳市沃特沃德股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市沃特沃德股份有限公司
Priority to PCT/CN2017/114077
Publication of WO2019104693A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Definitions

  • The invention relates to the field of sweeping robots, and in particular to a visual sweeping robot and a method for establishing a scene map.
  • When a sweeping robot localizes itself in the cleaning environment to establish a scene map, the environment is generally assumed to be static.
  • The current pose is obtained by matching against the static information in the environment, so when dynamic objects keep moving through the environment, for example a person or pet walking around at home, the matched pose will contain a large error.
  • Moreover, during map construction, if there is a moving object in front of the sweeping robot, that object is also marked as a map point on the map, which has a considerable impact on the sweeping robot's path planning.
  • The main object of the present invention is to provide a method for a sweeping robot to establish a scene map that contains no moving objects.
  • The invention provides a method for a sweeping robot to establish a scene map, comprising the steps of: collecting a picture; determining the area corresponding to the stationary objects in the picture and marking it as the effective area; and establishing a scene map according to the effective area in the picture.
  • The step of determining the area corresponding to the stationary objects in the picture and marking it as the effective area includes: extracting the moving objects in the picture by the optical flow method, then ignoring the areas of the picture corresponding to the moving objects, so that the remaining area, corresponding to the stationary objects, forms the effective area.
  • The step of establishing a scene map according to the effective area in the picture includes: calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture, and then creating the scene map from those positions.
  • The step of calculating the position of a stationary object in the scene map includes: acquiring the position of the camera in the scene map; calculating the position of the stationary object relative to the camera from the camera's intrinsic parameters; and calculating, according to a preset formula, the position of the stationary object in the scene map.
  • The intrinsic parameters of the camera include the camera's focal length and aperture center.
  • The step of establishing a scene map according to the effective area in the picture further includes: establishing a three-dimensional coordinate system according to the picture, then acquiring the coordinates of the stationary objects in that coordinate system and marking them in it to form the scene map.
  • The position of the stationary object in the spatial scene map is obtained by adding its coordinates [x, y, z] relative to the camera to the coordinates of the camera in the three-dimensional coordinate system.
  • The invention also provides a visual sweeping robot, comprising:
  • a vision system for collecting the pictures taken during the cleaning process;
  • a determination system for determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
  • a map system that establishes a scene map based on the effective area marked by the determination system.
  • The determination system includes: an optical flow module for extracting the moving objects in the picture by the optical flow method; and an ignoring module for ignoring the areas of the picture corresponding to the moving objects, so that the areas corresponding to the stationary objects form the effective area.
  • The map system includes: an intrinsics module for calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture; and an establishing module for creating the scene map from those positions.
  • The intrinsics module includes: a location unit for acquiring the position of the camera in the scene map; a first calculating unit for calculating, from the camera's intrinsic parameters, the position of a stationary object relative to the camera; and a second calculating unit for calculating, according to the preset formula, the position of the stationary object in the scene map.
  • The intrinsic parameters of the camera include the camera's focal length and aperture center.
  • The map system further includes: a coordinate system module for establishing a three-dimensional coordinate system according to the picture; and a marking module for acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in it to form the scene map.
  • The first calculating unit includes a formula subunit for calculating, according to a specified formula (given in the description), the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
  • The second calculating unit includes an addition subunit that performs the coordinate addition described above.
  • The visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate: the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.
  • FIG. 1 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention.
  • A method for a visual sweeping robot to establish a scene map includes the steps of collecting a picture, determining the effective area, and establishing the scene map from it.
  • The vision system of the visual sweeping robot can take photographs.
  • While the robot cleans, the vision system takes pictures, and a scene map is created from the photographs taken.
  • Every object in the cleaning environment is reflected in the map.
  • For example, the cleaning environment is the user's house: the tables, chairs, television and other objects in the house are photographed by the vision system, which then judges whether each object in the picture is moving. If an object is moving, it is judged not to belong to the fixed environment: its outline is acquired and removed from the picture, the moving object is treated as an invalid area, and the areas corresponding to the outlines of the stationary objects are treated as effective areas.
  • One way to determine that the area corresponding to a stationary object in the picture is the effective area is for the vision system to collect two pictures and compare them; if an object appears at different positions in the two pictures, it is judged to be a moving object.
  • The step of determining that the area corresponding to the stationary objects in the picture is the effective area includes: extracting the moving objects by the optical flow method, then ignoring the areas of the picture corresponding to them, so that the remaining area, corresponding to the stationary objects, forms the effective area.
  • The optical flow method assigns a velocity vector to every pixel in the picture, forming a motion vector field.
  • The sweeping robot can analyze the captured image dynamically from the velocity vector of each pixel. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image area.
  • When the image contains a moving object, the velocity vectors formed by the moving object necessarily differ from those of the background, so the position of the moving object in the image can be calculated.
  • The areas corresponding to the stationary objects in the picture are thereby segmented out and confirmed as the effective area: the areas corresponding to the moving objects are ignored, and only the areas corresponding to the remaining stationary objects are built into the map.
  • The step of establishing a scene map according to the effective area in the picture includes calculating the position of the stationary objects in the scene map and then building the map from those positions.
  • The camera's intrinsic parameters are its internal parameters, i.e. fixed coefficients such as the focal length and aperture, as they apply to the photograph taken.
  • Using these intrinsics together with the position of a stationary object in the picture, the position of the stationary object in the scene map can be calculated; once the positions of the stationary objects have been calculated, the scene map is established from this position information.
  • The step of calculating the position of a stationary object in the scene map proceeds as follows: the object's position relative to the camera is calculated from the intrinsics, the movement path of the visual sweeping robot is acquired so that the position of the robot's camera in the environment can be computed, and the object's position in the environment is then calculated according to the preset formula.
  • The intrinsic parameters of the camera include the camera's focal length and aperture center.
  • From the camera's focal length and aperture center parameter values, positional information such as the relative distance and relative angle between the sweeping robot and an object at the moment the photograph is taken can be determined, and the position of the stationary objects in the picture relative to the camera can be calculated.
  • The step of establishing a scene map according to the effective area in the picture further includes establishing a three-dimensional coordinate system according to the picture.
  • The step of calculating the position of a stationary object relative to the camera uses the specified formula, where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
  • The position of the camera at the moment the picture is taken is used as the coordinate origin of the three-dimensional coordinate system; the stationary objects in the picture are marked in this coordinate system, and the three-dimensional coordinate system is the scene map.
  • The three-dimensional coordinate system takes the wall as the X-axis and Y-axis plane, with the axis perpendicular to the wall as the Z-axis, and a scale mark is set at every fixed distance.
  • The position of a stationary object relative to the camera is calculated in the form of coordinate scales: specifically, the up-down distance between the stationary object and the camera in the cleaning environment is calculated, and then, from the camera's focal length and aperture center, the front-back and left-right distances of the stationary object relative to the camera in the cleaning environment are obtained.
  • In the calculation according to the preset formula, the camera's coordinates in the three-dimensional coordinate system are acquired first. For example, if the camera's coordinates when photographing are [1, 0, 1] and the coordinates of a stationary object relative to the camera computed by the above formula are [2, 0, 3], i.e. the object's coordinates with the camera as the coordinate origin, then to express the stationary object relative to the actual coordinate origin the camera's coordinates must be added, giving final coordinates of [3, 0, 4], which are then marked in the three-dimensional coordinate system. Repeating this calculation for the stationary objects in the pictures many times fills the three-dimensional coordinate system with points, from which the entire scene map can be established.
  • The visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate: in use, the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.
  • The present invention also provides a visual sweeping robot comprising:
  • a vision system 1 for collecting the pictures taken during the cleaning process;
  • a determination system 2 for determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
  • a map system 3 that establishes a scene map based on the effective area marked by the determination system 2.
  • The vision system 1 of the visual sweeping robot can take photographs.
  • While the robot cleans, the vision system 1 takes pictures, and a scene map is created from the photographs taken.
  • Every object in the cleaning environment is reflected in the map.
  • For example, the cleaning environment is the user's house: the tables, chairs, television and other objects in the house are photographed by the vision system, and the determination system 2 then uses the vision system 1 to judge whether each object in the picture is moving. If an object is moving, it is judged not to belong to the fixed environment: its outline is acquired and removed from the picture, the determination system 2 treats the moving object as an invalid area and the areas corresponding to the outlines of the stationary objects as effective areas, and when the map system 3 establishes the scene map, only the effective areas of the picture are used.
  • One way to determine that the area corresponding to a stationary object in the picture is the effective area is for the vision system to collect two pictures and compare them; if an object appears at different positions in the two pictures, it is judged to be a moving object.
  • The determination system 2 includes: an optical flow module 21 for extracting the moving objects in the picture by the optical flow method; and an ignoring module 22 for ignoring the areas of the picture corresponding to the moving objects, so that the areas corresponding to the stationary objects form the effective area.
  • The optical flow method assigns a velocity vector to every pixel in the picture, forming a motion vector field.
  • The sweeping robot can analyze the captured image dynamically from the velocity vector of each pixel. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image area. When the image contains a moving object, there is relative motion between the target and the background: the velocity vectors formed by the moving object necessarily differ from those of the background, so the optical flow module 21 can calculate the position of the moving object in the image.
  • The areas corresponding to the stationary objects in the picture are segmented out by the optical flow module 21 and confirmed as the effective area.
  • The ignoring module 22 ignores the areas of the picture corresponding to the moving objects and obtains the areas corresponding to the remaining stationary objects; only the areas corresponding to the stationary objects are built into the map.
  • The map system 3 includes: an intrinsics module 31 for calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture; and an establishing module 32 for establishing the scene map according to those positions.
  • The camera's intrinsic parameters are its internal parameters, i.e. fixed coefficients such as the focal length and aperture, as they apply to the photograph taken.
  • The intrinsics module 31 uses the camera's intrinsics at the time of the shot together with the position of a stationary object in the picture to calculate the object's position in the scene map. After the positions of the stationary objects in the scene map have been calculated, the establishing module 32 creates the scene map from this position information.
  • The intrinsics module 31 includes: a location unit 311 for acquiring the position of the camera in the scene map; a first calculating unit 312 for calculating, from the camera's intrinsic parameters, the position of a stationary object relative to the camera; and a second calculating unit 313 for calculating, according to a preset formula, the position of the stationary object in the scene map.
  • The location unit 311 calculates the position of a stationary object relative to the camera from the intrinsics and then acquires the movement path of the visual sweeping robot; the first calculating unit 312 can compute the position of the robot's camera in the environment, and the second calculating unit 313 can then, according to the preset formula, compute the position of the stationary objects in the picture within the environment, yielding their position in the scene map.
  • The intrinsic parameters of the camera include the camera's focal length and aperture center.
  • From the camera's focal length and aperture center, positional information such as the relative distance and relative angle between the sweeping robot and an object at the moment the photograph is taken can be determined, and the position of the stationary objects in the picture relative to the camera can be calculated.
  • The map system 3 further includes: a coordinate system module for establishing a three-dimensional coordinate system according to the picture; and a marking module for acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in it to form the scene map.
  • The first calculating unit 312 includes a formula subunit for calculating, according to the specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
  • After the camera takes a picture, the coordinate system module establishes a three-dimensional coordinate system with the camera's position at the time of the shot as the coordinate origin, and the marking module marks the stationary objects of the picture in this coordinate system; the three-dimensional coordinate system is the scene map.
  • The three-dimensional coordinate system takes the wall as the X-axis and Y-axis plane, with the axis perpendicular to the wall as the Z-axis, and a scale mark is set at every fixed distance.
  • The first calculating unit 312 calculates the position of a stationary object relative to the camera in the form of these coordinate scales.
  • The up-down distance of the stationary object relative to the camera in the cleaning environment is calculated, and the formula subunit then obtains, from the camera's focal length and aperture center, the front-back and left-right distances of the stationary object relative to the camera in the cleaning environment.
  • The second calculating unit 313 includes an addition subunit. In the calculation, the camera's coordinates in the three-dimensional coordinate system are acquired first: for example, if the camera's coordinates when photographing are [1, 0, 1] and the coordinates of a stationary object relative to the camera computed by the above formula are [2, 0, 3], i.e. the object's coordinates with the camera as the coordinate origin, then the camera's coordinates must be added to express the object relative to the actual coordinate origin, and the addition subunit finally computes the object's coordinates as [3, 0, 4], which are then marked in the three-dimensional coordinate system. Repeating this calculation many times fills the three-dimensional coordinate system with points, from which the entire scene map can be established.
  • The visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate: in use, the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

A visual sweeping robot and a method for establishing a map. In the map-building method, the sweeping robot collects pictures and, when building the map, ignores the moving objects in the pictures so that they are not added to the map. The method makes the established scene map more accurate.

Description

Visual sweeping robot and method for establishing a scene map

Technical Field

The present invention relates to the field of sweeping robots, and in particular to a visual sweeping robot and a method for establishing a scene map.

Background Art

When a sweeping robot localizes itself in the cleaning environment to establish a scene map, the environment is generally assumed to be static, and the current pose is computed by matching against the static information in the environment. When dynamic objects keep moving through the environment, for example a person or pet walking around at home, the matched pose contains a large error. Moreover, during mapping, if there is a moving object in front of the sweeping robot, that object is also marked as a map point on the map, which has a considerable impact on the sweeping robot's path planning.
Technical Problem

The main object of the present invention is to provide a method for a sweeping robot to establish a scene map, such that the scene map established by the method contains no moving objects.

Solution to the Problem

Technical Solution

The present invention provides a method for a sweeping robot to establish a scene map, comprising the steps of:
collecting a picture;
determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
establishing a scene map according to the effective area in the picture.
Further, the step of determining the area corresponding to the stationary objects in the picture and marking it as the effective area comprises:
extracting the moving objects in the picture by the optical flow method;
ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
Further, the step of establishing a scene map according to the effective area in the picture comprises:
calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
establishing the scene map according to the positions of the stationary objects in the scene map.
Further, the step of calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture comprises:
acquiring the position of the camera in the scene map;
calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
calculating the position of the stationary objects in the scene map according to a preset formula.
Further, the camera's intrinsic parameters include the camera's focal length and aperture center.
Further, the step of establishing a scene map according to the effective area in the picture comprises:
establishing a three-dimensional coordinate system according to the picture;
acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
and the step of calculating the position of the stationary objects relative to the camera comprises:
calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
[Corrected under Rule 26, 23.04.2018]
x = (u - c_x) * d / f_x
y = (v - c_y) * d / f_y
z = d
where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
Further, the step of calculating, according to the preset formula, the position of the stationary object in the spatial scene map comprises:
adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
The present invention also provides a visual sweeping robot, comprising:
a vision system for collecting the pictures taken during the cleaning process;
a determination system for determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
a map system for establishing a scene map according to the effective area marked by the determination system.
Further, the determination system comprises:
an optical flow module for extracting the moving objects in the picture by the optical flow method;
an ignoring module for ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
Further, the map system comprises:
an intrinsics module for calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
an establishing module for establishing the scene map according to the positions of the stationary objects in the scene map.
Further, the intrinsics module comprises:
a location unit for acquiring the position of the camera in the scene map;
a first calculating unit for calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
a second calculating unit for calculating the position of the stationary objects in the scene map according to a preset formula.
Further, the camera's intrinsic parameters include the camera's focal length and aperture center.
Further, the map system comprises:
a coordinate system module for establishing a three-dimensional coordinate system according to the picture;
a marking module for acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
and the first calculating unit comprises:
a formula subunit for calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
[Corrected under Rule 26, 23.04.2018]
x = (u - c_x) * d / f_x
y = (v - c_y) * d / f_y
z = d
where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
Further, the second calculating unit comprises:
an addition subunit for adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
Advantageous Effects of the Invention

Beneficial Effects

Compared with the prior art, the visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate; in use, the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.
Brief Description of the Drawings

Description of the Drawings

FIG. 1 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the steps of a method for a visual sweeping robot to establish a map according to an embodiment of the present invention;
FIG. 5 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
FIG. 6 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
FIG. 7 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention;
FIG. 8 is a schematic structural view of a visual sweeping robot according to an embodiment of the present invention.
The realization of the objects, the functional features and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Best Mode for Carrying Out the Invention

Preferred Embodiments of the Invention

It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
Referring to FIG. 1, a method for a visual sweeping robot to establish a scene map according to an embodiment of the present invention comprises the steps of:
S1: collecting a picture;
S2: determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
S3: establishing a scene map according to the effective area in the picture.
In this embodiment, the vision system of the visual sweeping robot can take photographs. While the robot cleans, the vision system takes pictures, and a scene map is established from them; every object in the cleaning environment is reflected in the map. For example, the cleaning environment is the user's house: the tables, chairs, television and other objects in the house are all photographed by the vision system, which then judges whether each object in the picture is moving. If an object is moving, it is judged not to belong to the fixed environment: its outline is acquired and removed from the picture, the moving object is treated as an invalid area, the areas corresponding to the outlines of the stationary objects are treated as effective areas, and only the effective areas of the picture are used when the scene map is established. One way to determine that the area corresponding to a stationary object in the picture is the effective area is for the vision system to collect two pictures and compare them; if an object appears at different positions in the two pictures, it is judged to be a moving object.
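By way of illustration only, and not as part of the original disclosure, the two-picture comparison described above can be sketched in Python with OpenCV; the threshold and kernel size below are assumed illustrative values.

    import cv2
    import numpy as np

    def moving_object_mask(frame_a, frame_b, thresh=30):
        """Return a binary mask of regions that changed between two captures.

        Changed pixels are attributed to moving objects; the complement of
        the mask is the candidate effective area of the second picture.
        """
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray_a, gray_b)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        # Dilate so the whole outline of the moving object is covered.
        return cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)

The effective area is then everything outside the mask, for example via cv2.bitwise_not(mask).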
Referring to FIG. 2, further, the step of determining that the area corresponding to the stationary objects in the picture is the effective area comprises:
S21: extracting the moving objects in the picture by the optical flow method;
S22: ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
In this embodiment, the optical flow method assigns a velocity vector to every pixel in the picture, forming a motion vector field. From the velocity vector of each pixel in the captured image, the sweeping robot can analyze the image dynamically. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image area; when the image contains a moving object, there is relative motion between the target and the background, and the velocity vectors formed by the moving object necessarily differ from those of the background, so the position of the moving object in the image can be calculated. Through the above steps, the areas corresponding to the stationary objects in the picture are segmented out and confirmed as the effective area; the areas corresponding to the moving objects are ignored, leaving the areas corresponding to the remaining stationary objects, and only the latter are built into the map.
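As a hedged sketch of this step, the dense Farnebäck algorithm from OpenCV computes one velocity vector per pixel; the patent does not prescribe a specific optical flow algorithm, and the magnitude margin below is an assumed value.

    import cv2
    import numpy as np

    def optical_flow_moving_mask(prev_frame, next_frame, margin=2.0):
        """Assign a velocity vector to each pixel and mask the moving regions."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow: one (dx, dy) vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        magnitude = np.linalg.norm(flow, axis=2)
        # Pixels moving clearly faster than the median background flow are
        # treated as belonging to a moving object.
        moving = magnitude > np.median(magnitude) + margin
        return (moving * 255).astype(np.uint8)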
Referring to FIG. 3, further, the step of establishing a scene map according to the effective area in the picture comprises:
S31: calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
S32: establishing the scene map according to the positions of the stationary objects in the scene map.
In this embodiment, the camera's intrinsic parameters are its internal parameters, i.e. fixed coefficients such as the focal length and aperture, as they apply to the photograph taken. Using the camera's intrinsics at the time the photograph was taken, together with the position of a stationary object in the picture, the position of the stationary object in the scene map can be calculated. After the positions of the stationary objects in the scene map have been calculated, the scene map is established from this position information.
Referring to FIG. 4, further, the step of calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture comprises:
S311: acquiring the position of the camera in the scene map;
S312: calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
S313: calculating the position of the stationary objects in the scene map according to a preset formula.
In this embodiment, the position of a stationary object relative to the camera is calculated from the intrinsics, and the movement path of the visual sweeping robot is then acquired, from which the position of the robot's camera in the environment can be computed; then, according to the preset formula, the position of the stationary objects in the picture within the environment can be calculated, yielding the position of the stationary objects in the scene map.
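The patent does not specify how the movement path yields the camera position; one plausible reading is simple dead reckoning over motion increments, sketched below under that assumption (the (distance, turn) step format is hypothetical).

    import math

    def accumulate_pose(steps, start=(0.0, 0.0, 0.0)):
        """Dead-reckon a planar camera pose (x, y, heading) from increments.

        `steps` is an assumed list of (distance, turn_radians) increments
        from the robot's movement path; the patent only says the path is
        acquired, not how it is represented.
        """
        x, y, heading = start
        for distance, turn in steps:
            heading += turn
            x += distance * math.cos(heading)
            y += distance * math.sin(heading)
        return x, y, heading

    # Example: forward 1 m, turn 90 degrees, forward 2 m.
    print(accumulate_pose([(1.0, 0.0), (0.0, math.pi / 2), (2.0, 0.0)]))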
Further, the camera's intrinsic parameters include the camera's focal length and aperture center.
In this embodiment, from the camera's focal length and aperture center parameter values, positional information such as the relative distance and relative angle between the sweeping robot and an object at the moment the photograph is taken can be determined, and the position of the stationary objects in the picture relative to the camera can be calculated.
Further, the step of establishing a scene map according to the effective area in the picture comprises:
S32: establishing a three-dimensional coordinate system according to the picture;
S33: acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
and the step of calculating the position of the stationary objects relative to the camera comprises:
S3123: calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
[Corrected under Rule 26, 23.04.2018]
x = (u - c_x) * d / f_x
y = (v - c_y) * d / f_y
z = d
where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
In this embodiment, after the camera takes a picture, a three-dimensional coordinate system is established with the camera's position at the time of the shot as the coordinate origin, and the stationary objects in the picture are then marked in this coordinate system; the three-dimensional coordinate system is the scene map. The three-dimensional coordinate system takes the wall as the X-axis and Y-axis plane, with the axis perpendicular to the wall as the Z-axis, and a scale mark is set at every fixed distance. The position of a stationary object relative to the camera is calculated in the form of these coordinate scales: specifically, the up-down distance between the stationary object and the camera in the cleaning environment is calculated, and then, from the camera's focal length and aperture center, the front-back and left-right distances of the stationary object relative to the camera in the cleaning environment are obtained.
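The specified formula matches the standard pinhole back-projection of a pixel with depth into camera-frame coordinates; a minimal sketch follows, assuming d is a metric depth for pixel (u, v) and using illustrative intrinsic values that are not taken from the patent.

    def pixel_to_camera(u, v, d, fx, fy, cx, cy):
        """Back-project pixel coordinates [u, v, d] into camera coordinates.

        fx, fy: focal lengths on the x and y axes; cx, cy: aperture center.
        Mirrors the specified formula: x = (u - cx) * d / fx,
        y = (v - cy) * d / fy, z = d.
        """
        return [(u - cx) * d / fx, (v - cy) * d / fy, d]

    # A pixel at (320, 240) with 2.0 m depth and a 525-pixel focal length.
    print(pixel_to_camera(320, 240, 2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5))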
Further, the step of calculating, according to the preset formula, the position of the stationary object in the spatial scene map comprises:
S3124: adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
In this embodiment, the camera's coordinates in the three-dimensional coordinate system are acquired first. For example, if the camera's coordinates when photographing are [1, 0, 1] and the coordinates of a stationary object relative to the camera computed by the above formula are [2, 0, 3], i.e. the object's coordinates with the camera as the coordinate origin, then to express the stationary object relative to the actual coordinate origin the camera's coordinates must be added, giving final coordinates of [3, 0, 4], which are then marked in the three-dimensional coordinate system. Repeating this method, the positions of the stationary objects in the pictures relative to the origin are calculated many times; after many calculations the three-dimensional coordinate system contains many points, and the entire scene map can be established.
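The worked example reduces to a per-point vector addition; the sketch below accumulates such world-frame points into a simple point map and reproduces the [1, 0, 1] + [2, 0, 3] = [3, 0, 4] example from the text.

    def camera_to_world(obj_in_camera, camera_in_world):
        """Add the camera's map coordinates to the object's camera-relative
        coordinates, as the addition step above does."""
        return [a + b for a, b in zip(obj_in_camera, camera_in_world)]

    scene_map = []                 # accumulated map points
    camera_pose = [1, 0, 1]        # camera coordinates when photographing
    object_rel = [2, 0, 3]         # stationary object relative to the camera
    scene_map.append(camera_to_world(object_rel, camera_pose))
    print(scene_map)               # [[3, 0, 4]]; many shots fill out the map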
In summary, the visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate; in use, the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.
Referring to FIG. 5, the present invention also provides a visual sweeping robot, comprising:
a vision system 1 for collecting the pictures taken during the cleaning process;
a determination system 2 for determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
a map system 3 for establishing a scene map according to the effective area marked by the determination system 2.
In this embodiment, the vision system 1 of the visual sweeping robot can take photographs. While the robot cleans, the vision system 1 takes pictures, and a scene map is established from the photographs taken; every object in the cleaning environment is reflected in the map. For example, the cleaning environment is the user's house: the tables, chairs, television and other objects in the house are all photographed by the vision system, and the determination system 2 then uses the vision system 1 to judge whether each object in the picture is moving. If an object is moving, it is judged not to belong to the fixed environment: its outline is acquired and removed from the picture, the determination system 2 treats the moving object as an invalid area and the areas corresponding to the outlines of the stationary objects as effective areas, and when the map system 3 establishes the scene map, only the effective areas of the picture are used. One way to determine that the area corresponding to a stationary object in the picture is the effective area is for the vision system to collect two pictures and compare them; if an object appears at different positions in the two pictures, it is judged to be a moving object.
Referring to FIG. 6, further, the determination system 2 comprises:
an optical flow module 21 for extracting the moving objects in the picture by the optical flow method;
an ignoring module 22 for ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
In this embodiment, the optical flow method assigns a velocity vector to every pixel in the picture, forming a motion vector field. From the velocity vector of each pixel in the captured image, the sweeping robot can analyze the image dynamically. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image area; when the image contains a moving object, there is relative motion between the target and the background, and the velocity vectors formed by the moving object necessarily differ from those of the background, so the optical flow module 21 can calculate the position of the moving object in the image. The optical flow module 21 thereby segments out the areas corresponding to the stationary objects in the picture and confirms them as the effective area; the ignoring module 22 ignores the areas corresponding to the moving objects, obtains the areas corresponding to the remaining stationary objects, and only the areas corresponding to the stationary objects are built into the map.
Referring to FIG. 7, further, the map system 3 comprises:
an intrinsics module 31 for calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
an establishing module 32 for establishing the scene map according to the positions of the stationary objects in the scene map.
In this embodiment, the camera's intrinsic parameters are its internal parameters, i.e. fixed coefficients such as the focal length and aperture, as they apply to the photograph taken. The intrinsics module 31 uses the camera's intrinsics at the time the photograph was taken, together with the position of a stationary object in the picture, to calculate the object's position in the scene map. After the positions of the stationary objects in the scene map have been calculated, the establishing module 32 establishes the scene map from this position information.
Referring to FIG. 8, further, the intrinsics module 31 comprises:
a location unit 311 for acquiring the position of the camera in the scene map;
a first calculating unit 312 for calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
a second calculating unit 313 for calculating the position of the stationary objects in the scene map according to a preset formula.
In this embodiment, the location unit 311 calculates the position of a stationary object relative to the camera from the intrinsics and then acquires the movement path of the visual sweeping robot; the first calculating unit 312 can compute the position of the robot's camera in the environment, and the second calculating unit 313 can then, according to the preset formula, compute the position of the stationary objects in the picture within the environment, yielding the position of the stationary objects in the scene map.
Further, the camera's intrinsic parameters include the camera's focal length and aperture center.
In this embodiment, from the camera's focal length and aperture center, positional information such as the relative distance and relative angle between the sweeping robot and an object at the moment the photograph is taken can be determined, and the position of the stationary objects in the picture relative to the camera can be calculated.
Further, the map system 3 comprises:
a coordinate system module for establishing a three-dimensional coordinate system according to the picture;
a marking module for acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
and the first calculating unit 312 comprises:
a formula subunit for calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
[Corrected under Rule 26, 23.04.2018]
x = (u - c_x) * d / f_x
y = (v - c_y) * d / f_y
z = d
where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
In this embodiment, after the camera takes a picture, the coordinate system module establishes a three-dimensional coordinate system with the camera's position at the time of the shot as the coordinate origin, and the marking module then marks the stationary objects of the picture in this coordinate system; the three-dimensional coordinate system is the scene map. The three-dimensional coordinate system takes the wall as the X-axis and Y-axis plane, with the axis perpendicular to the wall as the Z-axis, and a scale mark is set at every fixed distance. The first calculating unit 312 calculates the position of a stationary object relative to the camera in the form of these coordinate scales: specifically, the up-down distance of the stationary object relative to the camera in the cleaning environment is calculated, and the formula subunit then obtains, from the camera's focal length and aperture center, the front-back and left-right distances of the stationary object relative to the camera in the cleaning environment.
Further, the second calculating unit 313 comprises:
an addition subunit for adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
In this embodiment, the camera's coordinates in the three-dimensional coordinate system are acquired first. For example, if the camera's coordinates when photographing are [1, 0, 1] and the coordinates of a stationary object relative to the camera computed by the above formula are [2, 0, 3], i.e. the object's coordinates with the camera as the coordinate origin, then the camera's coordinates must be added to express the stationary object relative to the actual coordinate origin, and the addition subunit finally computes the object's coordinates as [3, 0, 4], which are then marked in the three-dimensional coordinate system. Repeating this method, the positions of the stationary objects in the pictures relative to the origin are calculated many times; after many calculations the three-dimensional coordinate system contains many points, and the entire scene map can be established.
In summary, the visual sweeping robot of the present invention establishes a scene map in which the objects moving in the cleaning environment are not included, making the established scene map more accurate; in use, the user's moving family members or pets are kept out of the scene map, which provides more efficient and accurate path planning for the subsequently planned cleaning paths.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of its patent; any equivalent structural or equivalent flow transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (14)

  1. A method for a visual sweeping robot to establish a scene map, characterized by comprising the steps of:
    collecting a picture;
    determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
    establishing a scene map according to the effective area in the picture.
  2. The method for a visual sweeping robot to establish a scene map according to claim 1, characterized in that the step of determining the area corresponding to the stationary objects in the picture and marking it as the effective area comprises:
    extracting the moving objects in the picture by the optical flow method;
    ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
  3. The method for a visual sweeping robot to establish a scene map according to claim 1, characterized in that the step of establishing a scene map according to the effective area in the picture comprises:
    calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
    establishing the scene map according to the positions of the stationary objects in the scene map.
  4. The method for a visual sweeping robot to establish a scene map according to claim 3, characterized in that the step of calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with their position in the picture comprises:
    acquiring the position of the camera in the scene map;
    calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
    calculating the position of the stationary objects in the scene map according to a preset formula.
  5. The method for a visual sweeping robot to establish a scene map according to claim 4, characterized in that the camera's intrinsic parameters include the camera's focal length and aperture center.
  6. [Corrected under Rule 26, 23.04.2018]
    The method for a visual sweeping robot to establish a scene map according to claim 5, characterized in that
    the step of establishing a scene map according to the effective area in the picture comprises:
    establishing a three-dimensional coordinate system according to the picture;
    acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
    and the step of calculating the position of the stationary objects relative to the camera comprises:
    calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
    x = (u - c_x) * d / f_x
    y = (v - c_y) * d / f_y
    z = d
    where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
  7. The method for a visual sweeping robot to establish a scene map according to claim 6, characterized in that the step of calculating, according to the preset formula, the position of the stationary object in the spatial scene map comprises:
    adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
  8. A visual sweeping robot, characterized by comprising:
    a vision system for collecting the pictures taken during the cleaning process;
    a determination system for determining the area corresponding to the stationary objects in the picture and marking it as the effective area;
    a map system for establishing a scene map according to the effective area marked by the determination system.
  9. The visual sweeping robot according to claim 8, characterized in that the determination system comprises:
    an optical flow module for extracting the moving objects in the picture by the optical flow method;
    an ignoring module for ignoring the moving objects in their corresponding areas of the picture, so that the remaining area of the picture, corresponding to the stationary objects, forms the effective area.
  10. The visual sweeping robot according to claim 8, characterized in that the map system comprises:
    an intrinsics module for calculating the position of the stationary objects in the scene map by using the camera's intrinsic parameters together with the position of the stationary objects in the picture;
    an establishing module for establishing the scene map according to the positions of the stationary objects in the scene map.
  11. The visual sweeping robot according to claim 10, characterized in that the intrinsics module comprises:
    a location unit for acquiring the position of the camera in the scene map;
    a first calculating unit for calculating the position of the stationary objects relative to the camera according to the camera's intrinsic parameters;
    a second calculating unit for calculating the position of the stationary objects in the scene map according to a preset formula.
  12. The visual sweeping robot according to claim 11, characterized in that the camera's intrinsic parameters include the camera's focal length and aperture center.
  13. [Corrected under Rule 26, 23.04.2018]
    The visual sweeping robot according to claim 12, characterized in that the map system comprises:
    a coordinate system module for establishing a three-dimensional coordinate system according to the picture;
    a marking module for acquiring the coordinates of the stationary objects in the three-dimensional coordinate system and marking them in the three-dimensional coordinate system to form the scene map;
    and the first calculating unit comprises:
    a formula subunit for calculating, according to a specified formula, the coordinates [x, y, z] of a stationary object relative to the camera in the three-dimensional coordinate system, the specified formula being:
    x = (u - c_x) * d / f_x
    y = (v - c_y) * d / f_y
    z = d
    where f_x and f_y are the camera's focal lengths on the x and y axes, c_x and c_y are the camera's aperture center, and [u, v, d] are the pixel coordinates in the picture.
  14. The visual sweeping robot according to claim 13, characterized in that the second calculating unit comprises:
    an addition subunit for adding the coordinates [x, y, z] of the stationary object relative to the camera to the coordinates of the camera in the three-dimensional coordinate system, to obtain the position of the stationary object in the spatial scene map.
PCT/CN2017/114077 2017-11-30 2017-11-30 Visual sweeping robot and method for establishing a scene map WO2019104693A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/114077 WO2019104693A1 (zh) 2017-11-30 2017-11-30 Visual sweeping robot and method for establishing a scene map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/114077 WO2019104693A1 (zh) 2017-11-30 2017-11-30 Visual sweeping robot and method for establishing a scene map

Publications (1)

Publication Number Publication Date
WO2019104693A1 (zh)

Family

ID=66664300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/114077 WO2019104693A1 (zh) 2017-11-30 2017-11-30 Visual sweeping robot and method for establishing a scene map

Country Status (1)

Country Link
WO (1) WO2019104693A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2502540A2 (en) * 2009-11-16 2012-09-26 LG Electronics Inc. Robot cleaner and method for controlling same
CN105928505A (zh) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 移动机器人的位姿确定方法和设备
CN106647742A (zh) * 2016-10-31 2017-05-10 纳恩博(北京)科技有限公司 移动路径规划方法及装置
WO2017091008A1 (ko) * 2015-11-26 2017-06-01 삼성전자주식회사 이동 로봇 및 그 제어 방법
CN108030452A (zh) * 2017-11-30 2018-05-15 深圳市沃特沃德股份有限公司 视觉扫地机器人及建立场景地图的方法


Similar Documents

Publication Publication Date Title
CN110599540B (zh) Real-time three-dimensional human body shape and pose reconstruction method and device with multi-view cameras
US9679385B2 (en) Three-dimensional measurement apparatus and robot system
US20200096317A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
CN107273846B (zh) Human body shape parameter determination method and device
CN112785702A (zh) SLAM method based on tight coupling of a 2D lidar and a binocular camera
García-Moreno et al. LIDAR and panoramic camera extrinsic calibration approach using a pattern plane
WO2019075948A1 (zh) Pose estimation method for a mobile robot
CN108030452A (zh) Visual sweeping robot and method for establishing a scene map
WO2019184083A1 (zh) Robot scheduling method
JP4132068B2 (ja) Image processing device, three-dimensional measurement device, and program for an image processing device
JP4906683B2 (ja) Camera parameter estimation device and camera parameter estimation program
JP2022089269A (ja) Calibration device and calibration method
KR20070057613A (ko) Method for estimating the three-dimensional positions of human joints using a sphere projection technique
JP2016148649A (ja) Information processing apparatus, control method for an information processing apparatus, and program
JP5698815B2 (ja) Information processing apparatus, control method for an information processing apparatus, and program
JP6066562B2 (ja) Measuring device, measuring method, and program
JP6040264B2 (ja) Information processing apparatus, control method for an information processing apparatus, and program
BenAbdelkader et al. Estimation of anthropomeasures from a single calibrated camera
WO2019104693A1 (zh) Visual sweeping robot and method for establishing a scene map
JP4886661B2 (ja) Camera parameter estimation device and camera parameter estimation program
Nguyen et al. Real-time obstacle detection for an autonomous wheelchair using stereoscopic cameras
JP2005252482A (ja) Image generation device and three-dimensional distance information acquisition device
JP2012225888A (ja) Position and orientation measurement apparatus and position and orientation measurement method
JP2022154076A (ja) Multi-camera calibration device, method, and program
Li et al. Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933379

Country of ref document: EP

Kind code of ref document: A1