CN112034837A - Method for determining working environment of mobile robot, control system and storage medium - Google Patents

Method for determining working environment of mobile robot, control system and storage medium

Info

Publication number
CN112034837A
CN112034837A (application CN202010687725.8A)
Authority
CN
China
Prior art keywords
image
ground
mobile robot
straight line
coordinate system
Prior art date
Legal status
Pending
Application number
CN202010687725.8A
Other languages
Chinese (zh)
Inventor
Inventor not announced
Current Assignee
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Original Assignee
Ankobot Shanghai Smart Technologies Co ltd
Shankou Shenzhen Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ankobot Shanghai Smart Technologies Co ltd, Shankou Shenzhen Intelligent Technology Co ltd filed Critical Ankobot Shanghai Smart Technologies Co ltd
Priority to CN202010687725.8A
Publication of CN112034837A


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors

Abstract

The application discloses a method for determining the working environment of a mobile robot, a control system, and a storage medium. The method comprises the following steps: acquiring an image captured by the image capturing device of the mobile robot in a working state, and identifying straight line segments in the image from the acquired image; converting the identified straight line segments in the image into a ground coordinate system; and obtaining the ground texture in the actual physical space from the straight line segments that meet a ground texture screening condition in the ground coordinate system, so as to determine the working environment of the mobile robot. In this way, the ground environment information of the actual physical space in which the mobile robot is located can be determined from the straight line segments that meet the ground texture screening condition in the ground coordinate system.

Description

Method for determining working environment of mobile robot, control system and storage medium
Technical Field
The application relates to the technical field of mobile robots, in particular to a method, a control system and a storage medium for determining a working environment of a mobile robot.
Background
A mobile robot is a machine that performs specific work automatically: it can accept human commands, run pre-programmed routines, and act according to principles formulated with artificial intelligence technology. Mobile robots can be used indoors or outdoors, in industry, business, or the home, and can perform functions such as touring, welcoming guests, order taking, floor cleaning, family companionship, and office assistance.
Taking the cleaning robot as an example: because of the complexity of its working environment, the cleaning robot needs to identify the working environment accurately and effectively while moving in working mode, in order to clean efficiently, avoid obstacles, and so on; for example, it must recognize the types and positions of various obstacles and identify the main direction of the house. However, these processing tasks are handled independently of one another, which places high demands on the cleaning robot's computing capability and on computing resources such as scheduling.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present application aims to provide a method, a control system, and a storage medium for determining the working environment of a mobile robot, so as to solve the prior-art problem of how to effectively integrate the information about the working environment reflected in the images acquired by the mobile robot, thereby improving the overall computing performance of the mobile robot and enabling effective recognition.
To achieve the above and other related objects, a first aspect of the present application provides a method of determining a working environment of a mobile robot including an image pickup device, the method including the steps of: acquiring an image shot by the image shooting device of the mobile robot in a working state and identifying a straight line segment in the image according to the acquired image; the acquired image comprises an image of the ground in the actual physical space where the mobile robot is located; converting the identified straight line segments in the image into a ground coordinate system; and determining the ground environment information of the mobile robot in the actual physical space according to the straight line segment which accords with the ground texture screening condition in the ground coordinate system.
In certain embodiments of the first aspect of the present application, the step of identifying a straight line segment in the image from the acquired image comprises: the straight line segments in the image are identified from the acquired image by a straight line segment detection method.
In certain embodiments of the first aspect of the present application, the image is a color image, and the step of converting the identified straight line segments in the image into a ground coordinate system comprises: converting the identified straight line segment in one image into a ground coordinate system according to preset physical reference information; or converting the identified straight line segments in the multiple images into a ground coordinate system according to preset physical reference information and the poses of the mobile robot when different images are obtained.
In certain embodiments of the first aspect of the present application, the image is a depth image, and the step of transforming the identified straight line segments in the image into a ground coordinate system comprises: converting the identified straight line segments in one image into a ground coordinate system according to the pixel values of the identified straight line segments and the pixel positions of the pixel values in the image; or converting the identified straight line segments in the plurality of images into a ground coordinate system according to the pixel values of the identified straight line segments, the pixel positions of the pixel values in the corresponding images and the poses of the mobile robot when different images are obtained.
In certain embodiments of the first aspect of the present application, the ground texture screening condition comprises at least one of: a statistical screening condition and a preset screening condition.
In certain embodiments of the first aspect of the present application, the statistical screening condition comprises at least one of: a statistical screening condition set according to statistical results, presented by the straight line segments in the ground coordinate system, that reflect the arrangement rule of a single ground texture; and a statistical screening condition set according to statistical results, presented by adjacent straight line segments in the ground coordinate system, that reflect the arrangement relation among a plurality of ground textures.
In certain embodiments of the first aspect of the present application, the preset screening condition comprises at least one of: a preset screening condition set according to a preset arrangement relation among a plurality of ground textures; and a preset screening condition set according to a preset arrangement rule of a single ground texture.
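Purely as an illustration (the patent provides no code, and every tolerance below is our own assumption), a screening condition on the arrangement relation among multiple ground textures might require that neighbouring straight line segments in the ground coordinate system be near-parallel and roughly evenly spaced, as floor or tile gaps are. A minimal Python sketch:

```python
import math

ANGLE_TOL_DEG = 5.0  # segments within 5 degrees count as parallel (assumed)
SPACING_TOL = 0.05   # neighbour spacing may vary by 5 cm (assumed, metres)

def segment_angle(seg):
    """Direction angle of ((x1, y1), (x2, y2)) in degrees, folded to [0, 180)."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def perpendicular_offset(seg, angle_deg):
    """Signed distance of the segment's midpoint from the origin, measured
    perpendicular to the shared direction angle_deg."""
    (x1, y1), (x2, y2) = seg
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    a = math.radians(angle_deg)
    return -mx * math.sin(a) + my * math.cos(a)

def screen_ground_texture(segments):
    """Keep segments forming a near-parallel, evenly spaced family of lines."""
    if len(segments) < 3:
        return []
    ref = segment_angle(segments[0])  # simplistic reference direction
    group = [s for s in segments
             if min(abs(segment_angle(s) - ref),
                    180.0 - abs(segment_angle(s) - ref)) <= ANGLE_TOL_DEG]
    offsets = sorted(perpendicular_offset(s, ref) for s in group)
    gaps = [b - a for a, b in zip(offsets, offsets[1:])]
    if gaps and max(gaps) - min(gaps) <= SPACING_TOL:
        return group  # spacing regular enough to look like floor gaps
    return []
```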
In certain embodiments of the first aspect of the present application, the method of determining a mobile robot working environment further comprises: and determining a ground image describing the ground in the acquired image based on the shooting angle of the image shooting device, so that the mobile robot can identify a straight line segment in the ground image according to the ground image.
In certain embodiments of the first aspect of the present application, the image is a depth image, and the method of determining a working environment of the mobile robot further comprises: and clustering the three-dimensional point cloud data corresponding to the depth image to obtain three-dimensional point cloud data describing the ground, so that the mobile robot can identify the straight line segments in the ground image according to the ground image corresponding to the three-dimensional point cloud data describing the ground.
In certain embodiments of the first aspect of the present application, the step of determining the ground environment information of the actual physical space in which the mobile robot is located, according to the straight line segments that meet the ground texture screening condition in the ground coordinate system, includes: determining a main direction in the actual physical space according to the straight line segments that meet the ground texture screening condition, so that the mobile robot moves along the main direction; and/or removing the image content corresponding to the straight line segments that meet the ground texture screening condition from the acquired image to obtain a target image, and identifying, from the target image, the obstacle type of an obstacle located on the ground; wherein the main direction is determined based on the direction angles, in the ground coordinate system, of the straight line segments that meet the ground texture screening condition.
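For illustration only, and under our own assumptions rather than the patent's, one way to derive a main direction from the direction angles of the screened segments is to fold all angles into a single quadrant (floor gaps typically run in two mutually perpendicular families) and take a length-weighted vote:

```python
import math

def main_direction_deg(segments, bin_width=2.0):
    """segments: iterable of ((x1, y1), (x2, y2)) in ground coordinates.
    Returns the dominant direction angle in [0, 90), or None if empty."""
    bins = {}
    for (x1, y1), (x2, y2) in segments:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 90.0
        key = int(angle // bin_width)
        # weight each vote by segment length so long gap lines dominate
        bins[key] = bins.get(key, 0.0) + math.hypot(x2 - x1, y2 - y1)
    if not bins:
        return None
    best = max(bins, key=bins.get)
    return (best + 0.5) * bin_width  # centre of the winning histogram bin
```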
In certain embodiments of the first aspect of the present application, the method of determining a mobile robot working environment further comprises: in case the main direction is determined, a navigation route is planned based on the current position of the mobile robot.
In certain embodiments of the first aspect of the present application, the step of identifying, from the target image, an obstacle type of an obstacle located on the ground comprises: identifying image characteristics of an obstacle in the target image, and determining the obstacle type of the obstacle according to the identified image characteristics of the obstacle; or performing obstacle recognition on the target image by using a classifier trained through machine learning to determine the obstacle type of the obstacle.
In certain embodiments of the first aspect of the present application, the obstacle type comprises at least one of: a winding type, an island type, and a space-partition type.
The second aspect of the present application also provides a control system of a mobile robot, the control system including: the linear segment identification module is used for acquiring an image shot by the image shooting device of the mobile robot in a working state and identifying a linear segment in the image according to the acquired image; the acquired image comprises an image of the ground in the actual physical space where the mobile robot is located; the conversion module is used for converting the identified straight line segments in the image into a ground coordinate system; and the ground environment information determining module is used for determining the ground environment information of the mobile robot in the actual physical space according to the straight line segment which accords with the ground texture screening condition in the ground coordinate system.
A third aspect of the present application also provides a control system of a mobile robot, the control system including: interface means for acquiring an image; a storage device for storing at least one program; and the processing device is connected with the interface device and the storage device and is used for calling and executing the at least one program so as to coordinate the interface device and the storage device to execute and realize the method for determining the working environment of the mobile robot in the first aspect of the application.
A fourth aspect of the present application also provides a mobile robot including: an image pickup device for picking up an image; the mobile device is used for controlled execution of mobile operation; a storage device for storing at least one program; and the processing device is connected with the mobile device, the storage device and the image shooting device and is used for calling and executing the at least one program so as to coordinate the mobile device, the storage device and the image shooting device to execute and realize the method for determining the working environment of the mobile robot in any one of the first aspect of the application.
In certain embodiments of the fourth aspect of the present application, the mobile robot is a cleaning robot.
A fifth aspect of the present application also provides a computer readable storage medium storing at least one program which, when invoked, executes and implements the method of determining a mobile robot working environment as described in any of the first aspects of the present application.
In summary, in the method, control system, and storage medium for determining the working environment of a mobile robot disclosed by the present application, the straight line segments in an image are converted into a ground coordinate system, and the ground texture in the actual physical space is obtained from the straight line segments that meet the ground texture screening condition in the ground coordinate system, so as to determine the working environment of the mobile robot. In this way, the ground textures in the working environment of the mobile robot are effectively detected, enabling the mobile robot to sense its working environment efficiently and completely.
Other aspects and advantages of the present application will be readily apparent to those skilled in the art from the following detailed description. Only exemplary embodiments of the present application are shown and described in the detailed description. As those skilled in the art will recognize, the disclosure of the present application enables changes to be made to the specific embodiments disclosed without departing from the spirit and scope of the present application. Accordingly, the descriptions in the drawings and the specification are illustrative only and not limiting.
Drawings
Fig. 1 is a schematic structural diagram of the ToF collecting part in the present application.
Fig. 2 is a schematic structural diagram of a motor with an integrated actuator and a movable member according to an embodiment of the present invention.
Fig. 3 is a schematic top view of the ToF collecting component disposed in the carrier in the embodiment of the present application.
Fig. 4 is a block diagram showing a hardware configuration of a control system of a mobile robot according to an embodiment of the present invention.
Fig. 5 shows a flow diagram of a method for determining a working environment of a mobile robot according to the present application in one embodiment.
Fig. 6 is a schematic diagram of a ground coordinate system according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating the principle of determining the relative spatial position between the suspected ground texture and the mobile robot according to the present application based on the imaging principle.
Fig. 8 is a schematic diagram of a control system of a mobile robot according to another embodiment of the present application.
Fig. 9 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings, which describe several embodiments of the application. It is to be understood that other embodiments may be utilized, and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," "above," "below," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
Although the terms first, second, etc. may be used herein to describe various elements or parameters in some instances, these elements or parameters should not be limited by these terms. These terms are only used to distinguish one element or parameter from another element or parameter. For example, a first route may be referred to as a second route, and similarly, a second route may be referred to as a first route, without departing from the scope of the various described embodiments. The first route and the second route are both describing one route, but they are not the same route unless the context clearly dictates otherwise.
Also, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
Mobile robots may be used in industry, business, or the home. Due to the complexity of its working environment, a mobile robot needs to detect that environment while in a working state in order to better perform autonomous movement and autonomous work. The working environment is the physical environment within the working area of the mobile robot (hereinafter referred to as the actual physical space), which includes at least one of the following objects: obstacles placed or hung in the actual physical space; room separators, such as walls or screens, forming the boundary of the actual physical space; and objects, such as floors or floor tiles, laid in the actual physical space. Each of these objects affects the operating efficiency, movement mode, and the like of the mobile robot in the actual physical space. For example, the ground texture formed by objects such as paved floors/tiles (e.g., the gaps between floor boards, or between the floor and walls) reflects structural characteristics of the room, such as its length and width directions. As another example, a winding object such as a plant vine hanging down to the ground, a data cable laid along the ground, or a skipping rope left on the ground is likely to trip up or otherwise hinder the movement of the mobile robot. The working state of the mobile robot refers to the various states the mobile robot may be in while powered on, including but not limited to: a moving state, a state of performing a specific task (e.g., a cleaning robot performing cleaning work), and a state of having stopped moving (e.g., controlling the rotation of the image capturing device at a fixed position in the actual physical space).
In a working state, the mobile robot detects its surroundings by means of the various sensing devices arranged on it; the detected data cover objects located above the mobile robot in the actual physical space, objects or cliffs along the moving direction, and the like. If the data detected in different directions cannot be processed in an integrated way, the intelligence of the mobile robot is reduced. This is especially true for a cleaning robot, which is mainly used for cleaning the ground: when ground gaps are not effectively identified, the mobile robot is hindered in performing its various tasks. For example, if a cleaning robot cannot sense a floor gap in the actual physical space, it cannot intensively clean the area where the gap is located. As another example, when the mobile robot recognizes the obstacle type of an obstacle on the ground from an image, interference from a ground gap with a long straight image feature may cause the gap to be erroneously recognized as a winding object lying on the ground. As yet another example, a mobile robot that cannot identify ground gaps must instead spend a long time searching the actual physical space for a wall surface in order to determine the main direction from a direction perpendicular or parallel to that wall. It should be noted that although the mobile robot could use different sensing devices to perform separate detections and resolve the above drawbacks through separate processing, doing so is not conducive to improving the overall performance of the mobile robot across its multiple recognition tasks.
Therefore, in order to improve the overall performance of the mobile robot in identifying the multiple operations, the application provides a method for determining the working environment of the mobile robot, and the method can identify the ground texture in the actual physical space of the mobile robot according to the image acquired by the image pickup device. Further, the mobile robot can better determine the main direction or the type of the obstacle according to the recognized ground texture; for example, in the process of obstacle identification, the interference of the ground texture is removed from the acquired image, so that the obstacle on the ground and the corresponding obstacle type can be identified more accurately; for another example, taking the mobile robot as an example of a cleaning robot, the cleaning robot determines a main direction according to a direction of a ground texture in the working environment in a working state, so as to improve a coverage rate of the cleaning robot for performing cleaning work.
In order to enable the mobile robot to recognize the ground texture in the actual physical space of the mobile robot through the image acquired by the image pickup device, the image acquired by the image pickup device should include the image of the ground in the actual physical space of the mobile robot. Wherein the ground includes but is not limited to the following categories: the floor surface for laying the composite floor, the floor surface for laying the solid wood floor, the floor surface for laying the carpet, the floor surface for laying the floor tile and the like. The ground texture refers to a regularly arranged texture that is presented on the whole ground surface, and includes, but is not limited to, regularly arranged ground gaps presented on the ground surface by floor/tile/carpet splicing (for example, gaps spliced between tiles, splicing gaps between floors, splicing gaps between carpets, etc.).
The image capturing device is used to provide, at a preset pixel resolution, a color image or a depth image containing the ground in the actual physical space where the mobile robot is located. Each depth image represents, by the pixel values (depth values) obtained at preset pixel positions, an image of the objects within the captured field of view; the depth data of each pixel in a depth image comprise the pixel position of that pixel in the depth image and the pixel value (depth value) corresponding to that pixel position. Each color image represents, by the color data acquired at preset pixel positions, an image of the objects within the captured field of view; the color data of each pixel in a color image comprise the pixel position of that pixel in the color image and the pixel value corresponding to that pixel position, wherein the pixel value in a color image is either a single-color-component pixel value or a full-color pixel value; the pixel value is, for example, a grayscale pixel value, an RGB pixel value, an R pixel value, a G pixel value, or a B pixel value; the color image is, for example, an R color image, a G color image, a B color image, an RGB image, or a grayscale image.
Here, the image pickup apparatus includes, but is not limited to: an image pickup device including a CCD, an image pickup device including a CMOS, an image pickup device including a depth measuring unit, an image pickup device (such as a ToF collecting section) integrated with a depth measuring unit and an infrared sensor, and the like; the depth measurement unit includes, but is not limited to: the depth measuring system comprises a laser radar depth measuring unit, a depth measuring unit based on flight time, a depth measuring unit based on binocular stereo vision, a depth measuring unit based on structured light technology and the like. For example, the depth measurement unit includes a light emitter and a light receiving array, wherein the light emitter projects a specific light signal to the surface of the object and reflects the light signal to the light receiving array, and the light receiving array calculates the depth value according to a change in the light signal caused by the surface of the object.
In order for the image acquired by the image capturing device to include an image of the ground, the image capturing device may be mounted on the mobile robot at a preset mounting inclination angle, which may be any angle from 0° to 90°. The mounting inclination angle is the included angle between a horizontal line along the traveling direction of the mobile robot and the optical axis or main optical axis of the image capturing device. Specifically, a mounting inclination angle of 0° means that the optical axis or main optical axis of the image capturing device is parallel to the horizontal line along the traveling direction of the mobile robot; a mounting inclination angle of 90° means that the optical axis or main optical axis is perpendicular to that horizontal line, i.e., the optical axis or main optical axis points vertically downward or vertically upward. It should be noted that when the optical axis or main optical axis points vertically upward, or forms a preset angle with that direction, the mobile robot needs to control the image capturing device to rotate by a deflection angle in order to obtain an image containing the ground. The preset angle is determined according to the viewing angle of the image capturing device in the vertical direction.
In an embodiment, the image capturing device is mounted at a position of a front end surface in a traveling direction of the mobile robot, and an optical axis of the image capturing device is parallel to a traveling plane, so that the included angle is 0 °.
In yet another embodiment, the image capturing device is mounted on the upper surface (i.e. the surface parallel to the traveling direction) of the mobile robot, but the image capturing device is obliquely placed in a concave structure, and the optical axis of the image capturing device forms an angle in the range of 10 ° to 80 ° with the horizontal line of the traveling direction.
In still another embodiment, the acquired image is captured by the image capturing device (e.g., the ToF collecting component) during rotation; by rotating the image capturing device, an image of a wider field of view can be captured. For example, the acquired image is a single image captured by the image capturing device during rotation. As another example, the acquired image is formed by stitching at least two images captured by the image capturing device during rotation; because the stitched image contains more of the obstacles, or more of the ground, in the actual physical space where the mobile robot is located, the mobile robot can determine the ground texture or the obstacle type more easily.
In an embodiment, the mobile robot may identify a straight line segment from the acquired image and further determine the relative spatial position between the mobile robot and the contour of the object corresponding to that straight line segment (hereinafter referred to as a suspected ground texture); on this basis, the mobile robot may acquire a comprehensive image including the ground by controlling the rotation of the image capturing device.
In another embodiment, when the mobile robot detects that an obstacle exists in the physical space where the mobile robot is located by using other sensing devices, the mobile robot further controls the image capturing device to rotate based on the approximate direction of the obstacle, so that a comprehensive image containing the obstacle is obtained. The method for determining the existence of the obstacle in the physical space of the mobile robot includes but is not limited to: the manner of sensor detection, or the manner of image recognition.
When the image pickup device is not rotated, the imaging angle of the image pickup device is the mounting inclination angle of the image pickup device, and when the image pickup device is rotated, the imaging angle of the image pickup device is the mounting inclination angle of the image pickup device plus the deflection angle of the image pickup device. The shooting angle refers to an included angle of an optical axis of the image shooting device relative to a horizontal line of the moving direction of the mobile robot when the image shooting device shoots an image.
The embodiment of determining the relative spatial position between a suspected ground texture and the mobile robot is the same as or similar to the embodiment of determining the relative spatial position between an obstacle and the mobile robot, and will be described in detail in step S120 below.
It should be noted that the comprehensive image is not necessarily a captured image containing the complete closed outline of an obstacle, or one consisting entirely of the ground; rather, as much comprehensive information about the identified obstacle or ground as possible is obtained within a preset rotation limit. The image capturing device may be kept stationary between rotations or may rotate continuously, thereby acquiring at least one image covering a wider field of view.
Referring to fig. 1, fig. 1 is a schematic structural diagram illustrating a ToF collecting component in the present application, where the mobile robot is connected to a driving component and controls the driving component 202 to drive the ToF collecting component 201 to rotate so as to obtain the image. Referring also to fig. 2, the driving unit 202 includes: a movable member 2021 and a driver 2022.
Specifically, the movable element 2021 is connected to the ToF collecting component 201 and drives it by moving. The ToF collecting component 201 and the movable element 2021 may be connected by positioning or through a transmission structure. The positioning connection includes any one or more of: snap connection, riveting, bonding, and welding. In an example of a positioning connection, such as shown in fig. 2, the movable element 2021 is, for example, a drive rod capable of rotating laterally, and the ToF collecting component 201 has a concave hole (not shown) that mates with the drive rod in a form-fitting manner; as long as the cross-sections of the drive rod and the concave hole are non-circular, the ToF collecting component 201 can rotate laterally with the drive rod. In some examples of the transmission structure, the movable element is, for example, a screw rod: a connection seat on the screw rod translates as the screw rod rotates, and the connection seat is fixed to the ToF collecting component 201, so that the ToF collecting component 201 moves with the connection seat. In other examples of the transmission structure, the ToF collecting component 201 and the movable element may be connected through one or more of a tooth portion, a gear, a rack, a toothed chain, and the like, so that the movement of the movable element is transmitted to the ToF collecting component 201.
Illustratively, the actuator 2022 and the movable element 2021 may be integral. For example, as shown in fig. 2, the driving component 202 itself may be a motor, and the movable component 2021 may be an output shaft of the motor, which rotates transversely to drive the ToF collecting component 201 sleeved with the output shaft to rotate transversely.
Referring to fig. 3, fig. 3 is a schematic top view of a ToF collecting component disposed in a carrier according to an embodiment of the present disclosure. When the ToF collecting part 201 is mounted on the main body of the mobile robot through the carrier 102, the mobile robot controls the driving part 202 connected thereto to drive the ToF collecting part 201 to rotate, so as to obtain the image required in step S110.
The mobile robot performs the control method by means of a control system disposed therein. Referring to fig. 4, fig. 4 is a block diagram of a hardware structure of a control system of a mobile robot according to an embodiment of the present disclosure. The control system 10 comprises storage means 11, interface means 12, and processing means 13.
The interface device 12 is used to obtain images taken by the image capturing device. Depending on the image capturing device(s) actually configured on the mobile robot, the interface device 12 is connected to at least one image capturing device and reads the images, captured by the corresponding image capturing device, that contain the objects within its field of view. The interface device 12 is further configured to output control commands for controlling the mobile robot; for example, the interface device 12 is connected to a driving motor of the traveling mechanism, a side brush motor for driving a side brush, and/or a rolling brush motor for driving a rolling brush, so as to output control commands for controlling the rotation of the traveling mechanism, the side brush, and/or the rolling brush. The control commands are generated by the processing device 13 based on the determined working environment in combination with the control strategy in the storage device 11. For example, when a ground texture is determined at least from the acquired image, the processing device 13, in combination with the control strategy, generates a control command for increasing the number of cleaning passes over the ground texture, and outputs it via the interface device 12 to the driving motor for the side brush and/or the rolling brush. The interface device 12 includes, but is not limited to: a serial interface such as an HDMI interface or a USB interface, a parallel interface, and the like.
The storage device 11 is used for storing at least one program, and the at least one program can be used for the processing device 13 to execute the method for determining the working environment of the mobile robot. The storage device 11 further stores a control policy corresponding to the working environment, wherein the control policy is control logic described by a program for the processing device 13 to execute, and the control policy is used for generating a control instruction for controlling the mobile robot based on the recognition condition of the mobile robot on the working environment to output through the interface device 12. In practical applications, the control strategy includes a movement control strategy or a behavior control strategy, where the movement control strategy is used to generate a control instruction for controlling the movement of the mobile robot based on a relative spatial position between the mobile robot located in real time and the obstacle of a certain type determined by the processing device 13, or is used to generate a control instruction for controlling the movement of the mobile robot based on whether a real-time moving direction of the mobile robot is perpendicular to or parallel to a direction of the ground texture, for example, the mobile robot is a cleaning robot, and in a case that the processing device determines the ground texture in an actual physical space, the processing device 13 generates a control instruction for turning the moving direction to a direction perpendicular to or parallel to the direction of the ground texture based on the control strategy in the storage device; the behavior control strategy is used for generating a control instruction for controlling the working mode of the mobile robot based on the determined type of the obstacle or the ground texture, for example, the mobile robot is a cleaning robot, and when the determined type of the obstacle is a winding type, the processing device 13 generates a control instruction for controlling the mobile robot to turn off the cleaning device based on the control strategy in the storage device.
Here, the storage device 11 includes, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), and non-volatile Memory (NVRAM). For example, the storage 11 includes a flash memory device or other non-volatile solid-state storage device. In certain embodiments, the storage 11 may also include memory remote from the one or more processing devices 13, such as network-attached memory accessed via RF circuitry or external ports and a communications network, which may be the internet, one or more intranets, local area networks, wide area networks, storage area networks, or the like, or suitable combinations thereof. The memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
The processing device 13 is connected to the interface device 12 and the storage device 11. The processing device 13 comprises one or more processors and is operable to perform data read and write operations with the storage device 11, carrying out processing such as identifying straight line segments in an image and converting the identified straight line segments into a ground coordinate system. The processing device 13 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device 13 is also operatively coupled with I/O ports that enable the mobile robot to interact with various other electronic devices, and with input structures that enable a user to interact with the mobile robot; for example, the user may input, through the input structure, the spacing between adjacent ground textures, the angular relationship of the ground textures, and the like. Thus, the input structures may include buttons, keyboards, mice, touch pads, and the like. Such other electronic devices include, but are not limited to: a slave processor, such as a microcontroller unit (MCU), dedicated to controlling the mobile device and the cleaning device of the mobile robot.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for determining a working environment of a mobile robot according to an embodiment of the present disclosure. Wherein the control method may be performed by the control system of the mobile robot shown in fig. 4. Wherein the processing device coordinates hardware such as the storage device and the interface device to execute the following steps.
In step S110, the processing device acquires an image captured by the image capturing device in an operating state of the mobile robot and identifies a straight line segment in the image according to the acquired image.
The image acquired by the processing device may be an image directly captured by the image capturing device, or may be formed by splicing at least two images captured by the image capturing device in a rotation process. The images captured by the image capturing device are depth images and/or color images, and the image capturing device for capturing different types of images and the assembling manner of the image capturing device are the same as or similar to those described above, and are not described in detail herein.
The straight line segment refers to a characteristic point connecting line with the curvature and/or the length meeting preset conditions. Specifically, when the curvature of the connection line of the feature points identified in the image is less than or equal to a preset curvature threshold value, and/or the length of the connection line of the feature points is greater than a preset length threshold value, it can be determined that the connection line of the feature points is a straight line segment. Wherein the curvature refers to a bending degree of a connecting line of the characteristic points, for example, the preset curvature threshold value is any value between 0 and 5 °.
In one embodiment, the processing device identifies the straight line segments in the image from the acquired image using a straight line segment detection method. Straight line segment detection methods include, but are not limited to: the Radon line detection method, the Hough line detection method, the Freeman line detection algorithm, the LSD (Line Segment Detector) algorithm, the inchworm crawling algorithm, and the like.
In a specific embodiment, the processing device first performs edge detection on the acquired image to extract an edge region in the image, that is, a region with a higher gradient in the image, and then the processing device detects the edge region by using a straight line segment detection method to identify a straight line segment in the image from the edge region. The edge region may correspond to the contour of an object in the actual physical space, such as a ground gap, an area where an object on the ground intersects the ground, an area where a wall surface intersects the ground, and so on.
In one example, the acquired image is a depth image, and the method for performing edge detection on the acquired image includes, but is not limited to: a scanning line iteration edge detection method, a bidirectional curvature edge detection method and an edge detection method based on differential operators.
In another example, when the acquired image is a single-color image such as an R color image, a G color image, a B color image, or a grayscale image, the methods for performing edge detection on the acquired single-color image include, but are not limited to: edge detection based on differential operators, edge detection based on adaptive smoothing filtering, relaxation iteration edge detection, neural-network-based edge detection, wavelet-based edge detection, edge detection based on gray correlation, and the like. When the acquired color image is an RGB image, the processing device may convert the RGB image into a grayscale image and then perform edge detection, or may decompose the RGB image into an R color image, a G color image, and a B color image, perform edge detection on the image of each color component, and then comprehensively process the detection results of the individual color images to determine the edge region in the image.
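As a minimal sketch of the edge-detection-then-line-detection pipeline described above (not code from the patent), OpenCV's Canny detector and probabilistic Hough transform can stand in for the listed edge and straight line segment detection methods; all thresholds here are assumed values:

```python
import cv2
import numpy as np

def detect_line_segments(image_bgr, min_length_px=40):
    """Identify straight line segments in an image via its edge region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # extract the edge region, i.e. the higher-gradient region of the image
    edges = cv2.Canny(gray, 50, 150)
    # detect straight line segments within the edge region; minLineLength
    # plays the role of the preset length threshold mentioned above
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min_length_px, maxLineGap=5)
    if lines is None:
        return []
    return [tuple(seg[0]) for seg in lines]  # each entry: (x1, y1, x2, y2)
```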
It should be noted that the processing device may also use a neural network algorithm to identify the straight line segments in the image, and the manner of identifying the straight line segments is not limited herein.
In some embodiments, in order to improve the accuracy of determining the ground texture and reduce the complexity of subsequent processing, the processing device further determines that the acquired image includes a ground image describing the ground, and then identifies a straight line segment in the ground image according to the ground image.
In one example, the processing device determines the ground image describing the ground in the acquired image based on the shooting angle of the image capturing device, so that the mobile robot can identify straight line segments in the ground image. The shooting angle is the included angle of the optical axis of the image capturing device relative to a horizontal line along the moving direction of the mobile robot at the moment the image is taken. For example, if the angle of the optical axis of the image capturing device relative to that horizontal line is θ, the mobile robot photographs the ground from a top-down perspective at that angle; if the image region corresponding to the angle θ (for example, the lower half of an image, or the region of the lower half extending from the image's lower edge up to one tenth of the image height) is stored in advance in the storage device, that partial image region is the ground image, and the processing device can identify the straight line segments in the ground image according to any of the above embodiments for identifying straight line segments in an image.
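A minimal sketch of this example, assuming the mapping from shooting angle to ground region stored in the storage device reduces to "the lower fraction of the frame" (the fraction and the lookup are hypothetical):

```python
def ground_image(image, ground_fraction=0.5):
    """Crop the region assumed to depict the ground for the current shooting
    angle; in practice ground_fraction would be looked up, per angle theta,
    from the storage device."""
    height = image.shape[0]
    return image[int(height * (1.0 - ground_fraction)):, :]
```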
In another example, the processing device clusters the three-dimensional point cloud data corresponding to the depth image to obtain three-dimensional point cloud data describing the ground, so that the mobile robot can identify the straight line segments in the ground image according to the ground image corresponding to the three-dimensional point cloud data describing the ground. For example, the coordinate origin corresponding to the three-dimensional point cloud data is the optical center position of the image capturing device; the image capturing device is a ToF collecting component, and the processing device converts the depth image into three-dimensional point cloud data according to the depth image measured by the ToF collecting component and the internal parameters of the ToF collecting component. Specifically, the internal parameters of the ToF collecting component include: the focal lengths fx and fy of the ToF collecting component on the x axis and y axis, respectively, and the offsets cx and cy of the optical axis of the ToF collecting component in the image coordinate system. If the pixel value (depth value) in the depth image is d and the pixel position of that depth value is (u, v), the three-dimensional point cloud data (X, Y, Z) corresponding to the pixel position (u, v) after coordinate transformation is given by Z = d and
X = (u − cx) · d / fx,  Y = (v − cy) · d / fy.
Then, the processing device clusters the obtained three-dimensional point cloud data to obtain three-dimensional point cloud data describing the ground, from which it can determine the ground image describing the ground in the image; the processing device may then identify the straight line segments in the ground image according to any of the above embodiments for identifying straight line segments in an image.
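The conversion above maps directly onto array operations. A minimal NumPy sketch (not the patent's code), where fx, fy, cx, cy are the internal parameters of the ToF collecting component and depth is an H x W array of depth values d:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth image into an H x W x 3 array of
    (X, Y, Z) points in the camera (optical-centre) coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column and row grids
    z = depth.astype(np.float64)                    # Z = d
    x = (u - cx) * z / fx                           # X = (u - cx) * d / fx
    y = (v - cy) * z / fy                           # Y = (v - cy) * d / fy
    return np.stack([x, y, z], axis=-1)
```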
It should be noted that the straight line segments in the whole image may also be identified first, and the ground image then obtained according to the above embodiments so as to select the straight line segments belonging to the ground image. When the acquired image is a depth image, the height of each straight line segment above a fitted ground plane may also be determined from the point cloud data corresponding to the depth image, in order to select the straight line segments belonging to the ground image. The fitted ground plane is obtained by fitting the three-dimensional point cloud data corresponding to the ground.
Based on the straight line segments identified in the image, the processing device executes step S120 to determine at least the true angular relationship between the suspected ground textures in the actual physical space, where a suspected ground texture is the contour of an object in the actual physical space corresponding to a straight line segment.
In step S120, the processing device converts the identified straight line segments in the image into a ground coordinate system.
Specifically, the processing device can determine the pixel positions of the identified straight line segments in the image coordinate system. Using a mapping relation, constructed from the acquired measurement data, between each straight line segment in the image coordinate system and its corresponding suspected ground texture, the processing device determines the relative spatial position between each suspected ground texture and the mobile robot, and converts the identified straight line segments from the image coordinate system into a ground coordinate system at least according to those relative spatial positions. The relative spatial position includes the physical distance and the azimuth angle between each point on the suspected ground texture and the mobile robot. The image coordinate system is a coordinate system established with an arbitrary pixel of the image (such as the pixel at the upper left corner) as the origin and with pixels as the unit; the abscissa and ordinate of a pixel represent its column number and row number in the image array, respectively. The ground coordinate system is a plane coordinate system parallel to the ground, constructed with an arbitrary point in the actual physical space as its origin, and uses physical length as its unit length. The position of an identified straight line segment in the ground coordinate system can reflect the true angular relationship between the suspected ground textures in the actual physical space; for example, for two suspected ground textures that are perpendicular to each other in the actual physical space, the corresponding straight line segments are also perpendicular to each other in the ground coordinate system. The positive direction of the ordinate axis of the ground coordinate system may be the forward direction of the mobile robot, and the positive direction of the abscissa axis may be toward the left side of the mobile robot, although the present application is not limited thereto. For example, the physical distance between a point on the suspected ground texture and the mobile robot may be represented by the point's ordinate in the ground coordinate system, and the tangent of the azimuth angle between that point and the mobile robot may be represented by the ratio of the point's ordinate to its abscissa in the ground coordinate system.
In an embodiment, the acquired image is a color image, the processing device needs to obtain a relative spatial position between each suspected ground texture and the mobile robot according to preset physical reference information, and convert the identified straight line segment located in the image coordinate system into a ground coordinate system at least according to the relative spatial position. The physical reference information includes but is not limited to: the device comprises a physical height of the image pickup device from the ground, physical parameters of the image pickup device and an included angle of an optical axis of the image pickup device relative to a horizontal plane or a vertical plane. Here, the technician obtains the physical reference information by previously calibrating the image pickup apparatus, for example, before the mobile robot leaves the factory, the technician previously measures the distance between the imaging center of the image pickup apparatus and the ground, and saves the distance as the physical height or an initial value of the physical height in the storage device. The physical height may also be obtained by calculating the design parameters of the mobile robot in advance. The included angle of the optical axis of the image pickup device relative to the horizontal plane or the vertical plane or the initial value of the included angle can be obtained according to the design parameters of the mobile robot. For the mobile robot with the adjustable image pickup device, the saved included angle may be determined after increasing/decreasing the adjusted deflection angle based on the initial value of the included angle, and the saved physical height may be determined after increasing/decreasing the adjusted height based on the initial value of the physical height. The physical parameters of the image capturing device include the angle of view and/or the focal length of the lens group, and the like.
In one embodiment, the processing device converts the identified straight line segments in one image into a ground coordinate system according to at least preset physical reference information and an imaging principle. For example, the origin of the ground coordinate system changes with the movement of the mobile robot (for example, the origin is determined based on the current position of the mobile robot), and the processing device determines the relative spatial position between the suspected ground texture and the mobile robot according to the preset physical reference information and the imaging principle, so as to directly convert the identified straight line segment in one image into the ground coordinate system. For another example, the origin of the ground coordinate system is a fixed point in the actual physical space, and the processing device converts the straight line segment in the image into the ground coordinate system based on the relative spatial position and the pose when the image is acquired. The pose includes: a position and/or a pose of the mobile robot, the pose including but not limited to a heading angle of the mobile robot, the position of the mobile robot representing a position of the mobile robot within the actual physical space.
Take as an example a ground coordinate system whose origin is determined by the current position of the mobile robot, whose positive ordinate axis is the robot's current forward direction, and whose positive abscissa axis points to the robot's left. Referring to fig. 6, which shows a schematic diagram of the ground coordinate system in an embodiment of the present application, the ground coordinate system takes the center of the robot's bottom as the origin; a point on a straight line segment in fig. 6 with a negative abscissa and a positive ordinate therefore lies to the front right of the mobile robot. Referring to fig. 7, which illustrates the principle of determining the relative spatial position between a suspected ground texture and the mobile robot based on the imaging principle, the diagram includes three coordinate systems: the image coordinate system UO1V, the world coordinate system with O3 as origin, and the coordinate system with O2 as origin. Assume the suspected ground texture in the actual physical space includes a point P, and that the height H of the image capture device above its travel plane is known (for example, H may be obtained from the Z-axis coordinate of the image capture device in the world coordinate system). Given the world coordinate point M corresponding to the image center, the distance O3M between M and the origin O3 of the world coordinate system, the image coordinates O1 of the lens center point, the measured image coordinates P1 of the pixel, the length and width of an actual pixel, and the focal length of the image capture device, the length of O3P can be obtained by derivation; from the length of O3P, the physical distance between the current position of the mobile robot and point P is then obtained. To determine the azimuth angle between point P and the robot's position, the processing device calculates it from the correspondence, pre-stored in the storage device, between each pixel of the image and an actual physical azimuth angle. Each pixel corresponds to one azimuth angle, which can be calculated from parameters such as the number of pixels, the focal length of the image capture device, and the angle of view. The processing device then converts the determined relative spatial position into abscissa and ordinate values in the ground coordinate system: for example, the physical distance between any point on the suspected ground texture and the mobile robot is the point's ordinate in the ground coordinate system, and the tangent of the azimuth angle between the point and the mobile robot is the ratio of the point's ordinate to its abscissa. It should be noted that the plane XO3Y perpendicular to the Z axis of the world coordinate system can directly serve as the basis for establishing the ground coordinate system described previously.
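As a rough illustration of the geometry just described, the following Python sketch back-projects a floor pixel to robot-centric ground coordinates under a pinhole camera model. It is not the patent's own derivation: the function and parameter names are invented for illustration, and it assumes the camera pitch is measured downward from the horizontal and that the "physical distance" is taken as the forward component, per the ordinate convention above.

```python
import math

def pixel_to_ground(u, v, cx, cy, f_px, cam_height, pitch_rad):
    """Back-project a ground-plane pixel to robot-centric ground coordinates.

    A minimal pinhole-model sketch (all parameter names are illustrative):
      u, v        pixel column/row of a point assumed to lie on the floor
      cx, cy      pixel coordinates of the image center (principal point)
      f_px        focal length expressed in pixels
      cam_height  physical height H of the optical center above the floor
      pitch_rad   downward tilt of the optical axis relative to horizontal
    Returns (x, y): abscissa (positive to the robot's left) and ordinate
    (positive forward), following the ground-coordinate convention above.
    """
    # Angle of the viewing ray below the optical axis, from the pixel row.
    ray_down = math.atan2(v - cy, f_px)
    # Total depression angle below horizontal; must be > 0 to hit the floor.
    depression = pitch_rad + ray_down
    if depression <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Forward distance to the ground point (the ordinate in this convention).
    y = cam_height / math.tan(depression)
    # Azimuth of the ray relative to the forward direction, from the column.
    azimuth = math.atan2(u - cx, f_px)
    # Lateral offset; negated so that positive x points to the robot's left.
    x = -y * math.tan(azimuth)
    return x, y
```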
In another embodiment, the processing device converts the identified straight line segments in multiple images into one ground coordinate system according to preset physical reference information and the poses of the mobile robot when the different images were acquired. Specifically, for the straight line segments in each image, the processing device first calculates the relative spatial position between each corresponding suspected ground texture and the position where the robot acquired that image. It then applies a rotation transformation to the straight line segments according to the robot's heading angle when each image was acquired and the directions of the abscissa and ordinate axes of the ground coordinate system, thereby converting the straight line segments of the different images into one ground coordinate system. In that coordinate system, the angular relationships between the straight line segments are the same as those between the corresponding suspected ground textures in the actual physical space; for example, two suspected ground textures perpendicular to each other in the actual physical space yield straight line segments that are likewise perpendicular in the ground coordinate system. On this basis, the processing device can also obtain the relative spatial positions between the straight line segments of the different images and the origin of the ground coordinate system, using the robot's heading angle and position when each image was acquired, the position corresponding to the origin, and the directions of the coordinate axes. The straight line segments of the different images are thereby converted into one ground coordinate system in which not only the angular relationships but also the distance relationships between the straight line segments match those between the corresponding suspected ground textures in the actual physical space. For example, two suspected ground textures that are parallel and 0.5 m apart in the actual physical space correspond to straight line segments that are parallel in the ground coordinate system and, by their coordinates, also 0.5 m apart.
Take as an example a ground coordinate system whose origin is the position of the mobile robot when it entered the working state. The processing device establishes the ground coordinate system from that origin and the robot's heading angle at that position, and determines the robot's pose relative to the origin as it moves. For example, the robot's direction of travel from the origin is the positive ordinate axis of the ground coordinate system, and the opposite direction the negative ordinate axis; rotating 90 degrees clockwise from the positive ordinate axis gives the positive abscissa axis, and the opposite direction the negative abscissa axis. The origin of the ground coordinate system and the signs of its axes are not limited to this.
In one example, the mobile robot is at position A with heading angle B when it enters the working state, at position C with heading angle D when the first image is acquired, and at position E with heading angle F when the second image is acquired. The processing device can obtain the relative spatial positions between the suspected ground textures corresponding to the straight line segments in the first image and position C according to the preset physical reference information, and can then determine the coordinates of those straight line segments in a ground coordinate system with position C as the origin. Based on the difference between heading angle B and heading angle D, the processing device can transform the straight line segments of the first image into the ground coordinate system with position A as the origin; the straight line segments of the second image can be transformed into the same coordinate system in the same manner. For example, if the difference between heading angle B and heading angle D is 90 degrees, the coordinates of a straight line segment G in the ground coordinate system with position A as origin may be obtained simply by exchanging the abscissa and ordinate of G in the ground coordinate system with position C as origin. Although the distance relationships between the straight line segments of the images in the ground coordinate system with position A as origin are not necessarily the same as those between the corresponding suspected ground textures in the actual physical space, the angular relationships are the same.
In another example, the mobile robot is at position A with heading angle B when it enters the working state, at position C with heading angle D when the first image is acquired, and at position E with heading angle F when the second image is acquired. From heading angle D and position C at the time the first image was acquired, together with position A and heading angle B, the processing device can obtain the relative spatial positions between the straight line segments and position A, and can thus convert the straight line segments of the first image into the ground coordinate system; the straight line segments of the second image can be converted in the same manner. In this ground coordinate system, both the angular relationships and the distance relationships between the straight line segments are the same as those between the corresponding suspected ground textures in the actual physical space.
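The pose-based conversion described in these examples amounts to a planar rigid transform of segment endpoints. The following numpy sketch (illustrative names; headings in radians and the axis conventions stated earlier are assumed) rotates segments by the heading difference and translates them by the capture position:

```python
import numpy as np

def segments_to_global(segments_local, robot_xy, robot_heading, origin_heading):
    """Transform segment endpoints from a robot-centric ground frame into the
    global ground frame whose origin is the start position A.

    A sketch under these assumptions (names are illustrative):
      segments_local  (N, 2, 2) array of endpoints in the frame of the pose
                      at which the image was taken (e.g. position C)
      robot_xy        that pose's position expressed in the global frame
      robot_heading   that pose's heading angle (e.g. D), in radians
      origin_heading  the heading at the origin pose (e.g. B), in radians
    """
    dtheta = robot_heading - origin_heading          # rotation between frames
    c, s = np.cos(dtheta), np.sin(dtheta)
    rot = np.array([[c, -s], [s, c]])                # 2-D rotation matrix
    pts = np.asarray(segments_local, dtype=float).reshape(-1, 2)
    # Rotate into the global axes, then translate by the capture position.
    global_pts = pts @ rot.T + np.asarray(robot_xy, dtype=float)
    return global_pts.reshape(-1, 2, 2)
```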
In another embodiment, the acquired image is a depth image. From the pixel values of an identified straight line segment and the pixel positions of those values in the image, the processing device can obtain the physical distance and azimuth angle between the corresponding suspected ground texture and the mobile robot, i.e., the relative spatial position between the suspected ground texture and the position where the robot acquired the image, and can then convert the identified straight line segments from the image coordinate system into a ground coordinate system at least according to those relative spatial positions.
In one embodiment, the processing means converts the identified straight line segments in one image into a ground coordinate system based on at least the pixel values of the identified straight line segments and the pixel positions of the pixel values in the corresponding image.
In another embodiment, the processing device converts the identified straight line segments in the multiple images into a ground coordinate system according to the pixel values of the identified straight line segments and the pixel positions of the pixel values in the corresponding images and the poses of the mobile robot when acquiring different images.
The embodiments of converting the straight line segments in one image into a ground coordinate system and converting the straight line segments in the multiple images into a ground coordinate system are the same as or similar to those described above, and will not be described in detail herein.
Based on the direction angle of each straight line segment and/or the interval between adjacent straight line segments in the ground coordinate system, the processing device executes step S130 to determine the ground environment information in the actual physical space where the mobile robot is located.
In step S130, the processing device determines the ground environment information in the actual physical space where the mobile robot is located according to the straight line segment that meets the ground texture screening condition in the ground coordinate system.
Specifically, according to the ground texture screening condition and the direction angle of each straight line segment and/or the interval between adjacent straight line segments in the ground coordinate system, the processing device may determine the straight line segments in the ground coordinate system that satisfy the screening condition, and from the screened segments determine the ground environment information of the actual physical space in which the mobile robot is located. The direction angle is the angle between a straight line segment and a chosen coordinate axis of the ground coordinate system; for example, for each straight line segment in a ground coordinate system UOV, the angle between the segment and the abscissa axis U or the ordinate axis V may be obtained, and the processing device may take the angle between a segment and either axis as that segment's direction angle. The angle lies in the range 0° to 180°. The interval between adjacent straight line segments refers to the spacing between adjacent, mutually parallel segments in the ground coordinate system; for example, for two straight line segments parallel to the abscissa axis U, if every point on one segment has ordinate 3 m and every point on the other has ordinate 3.5 m, the interval between them is 0.5 m. The ground environment information includes at least one of: position information of the recognized ground textures relative to the mobile robot, data reflecting the arrangement rule of a single ground texture, data reflecting the arrangement relationships among multiple ground textures, interval data reflecting the spacing between ground textures, main direction information determined from the ground textures, and the azimuth angle, physical distance, and even obstacle type, relative to the mobile robot, of obstacles on the ground that are distinct from the ground textures.
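For concreteness, the direction angle and interval just defined can be computed from segment endpoints as in this sketch (illustrative helper names; angles are folded into [0, 180) because a texture line has no direction):

```python
import math

def direction_angle_deg(p0, p1):
    """Direction angle of a segment w.r.t. the abscissa axis, in [0, 180)."""
    ang = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))
    return ang % 180.0  # a line has no direction, so fold into half a turn

def interval_between(seg_a, seg_b):
    """Perpendicular spacing between two (near-)parallel segments: the
    distance from one segment's midpoint to the other segment's line."""
    (x0, y0), (x1, y1) = seg_a
    mx = (seg_b[0][0] + seg_b[1][0]) / 2.0
    my = (seg_b[0][1] + seg_b[1][1]) / 2.0
    # Line through seg_a: a*x + b*y + c = 0, with (a, b) its unit normal.
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    a, b = -dy / norm, dx / norm
    c = -(a * x0 + b * y0)
    return abs(a * mx + b * my + c)
```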
Where the processing device performs statistics on the direction angles, the intervals between adjacent straight line segments, and the like, the ground texture screening condition may be a statistical screening condition together with a preset screening condition, a statistical screening condition alone, or a preset screening condition alone. Where the processing device performs no statistics, the ground texture screening condition is a preset screening condition.
In one example, the statistical screening condition is set according to statistics of the arrangement rule of a single ground texture as presented by the straight line segments in the ground coordinate system, the statistical result being, for example, a count. Quantities reflecting the arrangement rule of a single ground texture include the direction angle of a straight line segment in the ground coordinate system and/or the length of the segment. For example, the processing device may tally the direction angles of the straight line segments in the ground coordinate system, obtain the segments belonging to each direction angle and thus the count for each angle, and the statistical screening condition may then select the straight line segments corresponding to the direction angle with the largest count. For instance, if there are 2 straight line segments with a direction angle of 30 degrees and 11 with a direction angle of 90 degrees, the suspected ground textures corresponding to the 11 segments are taken as the ground textures in the actual physical space.
In another example, the statistical screening condition is set according to statistics of the arrangement relationships among multiple ground textures as presented by adjacent straight line segments in the ground coordinate system. Quantities reflecting the arrangement relationships among multiple ground textures include the intervals between adjacent segments and the intersection angles between intersecting segments; the statistical result is, for example, the count of each interval between adjacent segments or of each intersection angle. For example, the processing device may determine the intervals between adjacent straight line segments that share a direction angle and obtain the count for each interval, and the statistical screening condition then selects the straight line segments corresponding to the interval with the largest count. For instance, if adjacent segments with a direction angle of 0 degrees occur 2 times at an interval of 2 m and 10 times at an interval of 0.5 m, while adjacent segments with a direction angle of 30 degrees occur once at an interval of 1 m, the suspected ground textures corresponding to the 0-degree direction angle and 0.5 m interval are taken as the ground textures in the actual physical space.
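A minimal version of the two statistical screenings above, assuming the helper functions from the previous sketch and illustrative bin widths for quantizing angles and intervals, might look like this:

```python
from collections import Counter

def screen_by_statistics(segments, angle_bin_deg=5.0, interval_bin=0.05):
    """Statistical screening sketch: keep the segments whose quantized
    direction angle is the most frequent, then, among those, the segments
    whose quantized spacing to their nearest parallel neighbour is the most
    frequent. Uses direction_angle_deg/interval_between from the sketch
    above; bin widths are illustrative, not values from the source."""
    # 1. Vote on direction angles.
    binned = [round(direction_angle_deg(*s) / angle_bin_deg) for s in segments]
    top_bin, _ = Counter(binned).most_common(1)[0]
    candidates = [s for s, b in zip(segments, binned) if b == top_bin]
    if len(candidates) < 2:
        return candidates
    # 2. Among same-angle segments, vote on nearest-neighbour intervals.
    gaps = []
    for i, s in enumerate(candidates):
        others = [interval_between(s, t)
                  for j, t in enumerate(candidates) if j != i]
        gaps.append(round(min(others) / interval_bin))
    top_gap, _ = Counter(gaps).most_common(1)[0]
    return [s for s, g in zip(candidates, gaps) if g == top_gap]
```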
It should be noted that the statistical screening condition may also combine statistics of the arrangement rule of a single ground texture with statistics of the arrangement relationships among multiple ground textures as presented in the ground coordinate system. For example, among the straight line segments corresponding to the direction angle with the largest count, those corresponding to the interval with the largest count are selected, and the ground textures in the actual physical space are determined accordingly.
In one example, the preset screening condition is set according to a preset arrangement relationship among multiple ground textures. Such arrangement relationships include the interval between adjacent ground textures and the angular relationship between ground textures; examples of angular relationships include perpendicularity and parallelism. The preset arrangement relationship may be input directly by the user, or pre-stored in the storage device and input by the user selecting a stored relationship; the processing device sets the preset screening condition from the input relationship, and the ground textures in the actual physical space can be determined on that basis. For example, if the user directly inputs an interval of 0.5 m between adjacent ground textures, the preset screening condition is that adjacent parallel straight line segments be 0.5 m apart, and the suspected ground textures corresponding to the segments in the ground coordinate system that satisfy this condition are the ground textures in the actual physical space. As another example, if the angular relationship is perpendicularity, the preset screening condition is mutually perpendicular straight line segments, and the suspected ground textures corresponding to such segments in the ground coordinate system are the ground textures in the actual physical space.
In another example, the preset screening condition is set according to a preset arrangement rule of a single ground texture, for example its length. The length value may be input directly by the user, or different length values may be pre-stored in the storage device and input by the user selecting one of them; the processing device sets the preset screening condition from the input length, and the ground textures in the actual physical space can be determined on that basis. For example, with a preset length of 0.5 m, the preset screening condition is straight line segments whose length lies between 0.45 m and 0.55 m (0.5 m ± 0.05 m), and the suspected ground textures corresponding to the segments in the ground coordinate system that satisfy this condition are the ground textures in the actual physical space.
It should be noted that the preset screening condition may also combine a preset arrangement relationship among multiple ground textures with a preset arrangement rule of a single ground texture: for example, among the straight line segments that satisfy the preset interval between adjacent ground textures, those that also satisfy the preset length are selected, and the ground textures in the actual physical space are determined accordingly.
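A sketch of the preset screening, with the 0.05 m tolerance borrowed from the length example above and otherwise illustrative parameter names (it reuses interval_between from the earlier sketch):

```python
import math

def matches_preset(segments, expected_interval=None, expected_length=None,
                   tol=0.05):
    """Preset-screening sketch: keep segments whose length and/or spacing
    match user-supplied values within a tolerance (0.05 m here, matching
    the 0.5 m +/- 0.05 m example above; all values are illustrative)."""
    kept = []
    for s in segments:
        (x0, y0), (x1, y1) = s
        if expected_length is not None:
            if abs(math.hypot(x1 - x0, y1 - y0) - expected_length) > tol:
                continue  # fails the single-texture length rule
        if expected_interval is not None:
            others = [interval_between(s, t) for t in segments if t is not s]
            if not others or abs(min(others) - expected_interval) > tol:
                continue  # fails the adjacent-texture spacing rule
        kept.append(s)
    return kept
```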
Based on the above ground texture screening conditions, the processing device may determine a straight line segment that meets the ground texture screening conditions in the ground coordinate system, and further determine a ground texture in an actual physical space corresponding to the straight line segment. Based on the determined ground texture, the processing device can obtain ground environment information such as position information of the ground texture relative to the mobile robot.
Further, the processing device can determine the main direction in the actual physical space according to the straight line segment meeting the ground texture screening condition, so that the mobile robot can move based on the main direction established by the ground texture.
In an embodiment, the processing device determines a main direction in the actual physical space according to the straight line segment meeting the ground texture screening condition, so that the mobile robot moves along the main direction.
The main direction is a direction parallel to the ground textures or perpendicular to them, and is determined from the direction angle, in the ground coordinate system, of the straight line segments that meet the ground texture screening condition. In one example, the screened segments all share the same direction angle in the ground coordinate system (i.e., the recognized ground textures are parallel to one another), and the processing device directly takes the direction parallel or perpendicular to the ground textures as the main direction. In another example, the screened segments have two direction angles (i.e., the recognized ground textures run in two directions), and the processing device determines the main direction from one of them.
Specifically, the processing device determines the direction angle, in the ground coordinate system, of the straight line segments meeting the ground texture screening condition. It can then judge, from the robot's current moving direction, whether that direction in the actual physical space is perpendicular or parallel to the direction of the ground textures. If so, the robot is controlled to continue along its current direction, i.e., along a main direction; if not, the heading is adjusted step by step, using a preset angle step as the unit, toward the direction of the ground textures until the robot's moving direction is approximately parallel or perpendicular to it. The moving direction can be obtained from the robot's current heading angle.
For example, the mobile robot is at position A with heading angle B when it enters the working state, and is currently at position X with heading angle V; the origin of the ground coordinate system is A, the positive ordinate axis is the robot's moving direction when it entered the working state, and the positive abscissa axis points to the robot's right. The processing device can determine the direction angle Q, in the ground coordinate system with A as origin, of the straight line segments meeting the ground texture screening condition; from the difference between heading angle B and heading angle V, it can determine the direction angle L of the robot's current moving direction in the same coordinate system; and from the difference between Q and L, it can determine whether the robot's current moving direction in the actual physical space is perpendicular or parallel to the direction of the ground textures.
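The stepwise alignment can be reduced to computing the heading's offset from the nearest texture-parallel or texture-perpendicular direction. A sketch, with illustrative step and tolerance values:

```python
def heading_correction(texture_angle_deg, heading_angle_deg, step_deg=2.0,
                       tol_deg=1.0):
    """Sketch of the stepwise alignment described above: return the signed
    adjustment (one angle step at a time) that turns the current heading
    toward the nearest direction parallel or perpendicular to the ground
    texture. step_deg and tol_deg are illustrative parameters."""
    # Smallest signed offset of the heading from any multiple of 90 degrees
    # measured from the texture direction (parallel or perpendicular).
    diff = (heading_angle_deg - texture_angle_deg) % 90.0
    offset = diff if diff <= 45.0 else diff - 90.0
    if abs(offset) <= tol_deg:
        return 0.0                      # already (nearly) aligned
    # Turn against the offset, one preset angle step per control cycle.
    return -step_deg if offset > 0 else step_deg
```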
In certain embodiments, once the main direction is determined, the processing device also plans a navigation route based on the current position of the mobile robot.
The navigation route may include: a first route along which the robot, starting from its current position, adjusts until it is parallel to the main direction, and a second route that traverses the work area starting from the end point of the first route. The work area is, for example, the cleaning area of a cleaning robot or the patrol area of a patrol robot.
Taking the cleaning robot as an example, based on the main direction constructed from the ground textures, the processing device controls the cleaning robot to adjust from its current position until it is parallel to the main direction. Then, with the robot's current position as the starting point, it plans a boustrophedon ("bow"-shaped) or similar route through the cleaning area, turning (for example, changing the heading angle by 180 degrees) whenever a wall is met, so that the cleaning robot covers the area to be cleaned as completely as possible during operation and cleaning efficiency is improved.
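A toy version of such a boustrophedon route over a rectangular area already expressed in main-direction-aligned coordinates might be generated as follows; a real planner would additionally handle walls and obstacles detected en route:

```python
def boustrophedon_waypoints(x_min, x_max, y_min, y_max, lane_spacing):
    """Sketch of a boustrophedon ('bow'-shaped) coverage route over a
    rectangular work area whose ordinate axis is parallel to the main
    direction; lane_spacing would typically be close to the robot's
    cleaning width. Illustrative only, not the patent's planner."""
    waypoints = []
    x = x_min
    forward = True
    while x <= x_max:
        ys = (y_min, y_max) if forward else (y_max, y_min)
        waypoints.append((x, ys[0]))    # start of the lane
        waypoints.append((x, ys[1]))    # end of the lane (then turn 180 deg)
        x += lane_spacing
        forward = not forward
    return waypoints
```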
By determining the main direction in the actual physical space from the straight line segments meeting the ground texture screening condition, the processing device lets the mobile robot perceive its working environment more completely; moving along the main direction can raise the coverage the robot achieves when carrying out its specific work.
Where the processing device has determined the straight line segments meeting the ground texture screening condition, it can also accurately identify the obstacle type of obstacles located on the ground.
In one embodiment, the processing device removes the image content corresponding to the straight line segments meeting the ground texture screening condition from the acquired image to obtain a target image, and identifies the type of obstacles on the ground according to the target image.
Obstacle types include, but are not limited to, at least one of: the winding type, the island type, the space-separation type, and so on. The winding type covers obstacles that easily wind around the moving device (such as a roller) of the mobile robot, or around cleaning components such as the side brush or rolling brush of a cleaning robot; examples include, but are not limited to, cables, ropes, ribbons, laces, cloth scraps, plant vines, and the like. The island type covers obstacles that the mobile robot can easily circumvent without touching, such as the aforementioned chairs, shoes, socks, pet feces, balls, etc. The space-separation type covers barriers that divide space into different functional areas, such as walls, doors, windows, wardrobes, screens, and the like.
Specifically, the processing device removes the image content corresponding to the straight line segments meeting the ground texture screening condition from the acquired image to obtain a target image. For example, the pixel values corresponding to those straight line segments may be replaced with the pixel values of adjacent pixel positions; alternatively, the image regions corresponding to the segments may be removed outright and the remaining image stitched together to form the target image. The processing device can then identify the obstacle type of obstacles on the ground from the target image.
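One way to realize the "replace with adjacent pixel values" variant is to mask the screened segments and inpaint them from their surroundings, as in this OpenCV sketch (the inpainting choice and thickness margin are illustrative, not prescribed by the source):

```python
import cv2
import numpy as np

def remove_texture_segments(image, segments_px, thickness=3):
    """Sketch of producing the target image: paint the pixels of the
    screened line segments into a mask and fill them from their
    neighbourhood, approximating 'replacing the segment's pixel values
    with those of adjacent pixel positions'. Assumes an 8-bit image;
    segments_px are the segments' endpoints in pixel coordinates, and
    thickness is an illustrative margin around each segment."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for (p0, p1) in segments_px:
        cv2.line(mask, tuple(map(int, p0)), tuple(map(int, p1)), 255,
                 thickness)
    # Fill the masked pixels from surrounding values (Telea inpainting).
    return cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```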
In one embodiment, the processing device identifies image features of an obstacle in the target image and determines the obstacle's type from those features. The image features are characterized by image data (e.g., depth data or color data), feature lines, feature points, and combinations thereof used for matching against obstacle types, and include, for example, contour features and/or shape features. For example, contour features associated with the island type include, but are not limited to, the spatial extent occupied by a closed contour and local or global features on the contour of a typical island-type obstacle. Contour features associated with the winding type include, for example, a statistical contour width no greater than a preset width threshold (e.g., no greater than 1 cm), where the contour width describes the shortest distance between the contour lines at the upper and lower edges of the obstacle. Contour features associated with the space-separation type include straight-line features, broken-line features, a straight-line length exceeding a preset threshold, and the like.
The shape features are geometric shapes, or combinations of geometric shapes, composed of or abstracted from feature lines and/or feature points, and are used for matching against the various obstacle types. The geometric shapes or combinations may be based on the full contour of the recognized obstacle or on a partial representation of it. For example, shape features set for the island type include one or more of: circle, sphere, arc, square, cube, pi-shape, and the like; the shape features of a shoe include several arcs joined end to end, and those of a chair include a pi-shape, an eight-claw shape, etc. Shape features set for the winding type include at least one or more of: curved shapes, serpentine shapes, etc. Shape features set for the space-separation type include at least one or more of: straight-line shapes, broken-line shapes, rectangles, etc.
The processing device performs feature recognition on the target image using the preset feature rules corresponding to each obstacle type, thereby obtaining the corresponding obstacle type.
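A rule-based sketch in the spirit of the contour features above; all thresholds are invented for illustration and would need tuning, and an 8-bit BGR target image is assumed:

```python
import cv2

def classify_by_feature_rules(target_image, width_thresh_px=10,
                              line_len_thresh_px=200):
    """Sketch of rule-based typing on the target image: a very long, thin
    contour suggests a space-separation type; a thin but shorter contour
    suggests a winding type; other closed contours default to the island
    type. Thresholds are illustrative, not values from the source."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        rect = cv2.minAreaRect(c)            # ((cx, cy), (w, h), angle)
        short, long_ = sorted(rect[1])
        if short <= width_thresh_px and long_ >= line_len_thresh_px:
            labels.append("space-separation")  # long straight edge
        elif short <= width_thresh_px:
            labels.append("winding")           # thin and elongated
        else:
            labels.append("island")
    return list(zip(contours, labels))
```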
In another embodiment, the processing device performs obstacle recognition on the target image using a classifier trained through machine learning to determine an obstacle type of the obstacle.
Wherein the classifier is configured to identify at least one of a type of winding, a type of islanding, and a type of spatial separation in the image, and to determine a location of the identified obstacle of the corresponding type in the image. To this end, the number of classifiers used may be one or more. For example, a plurality of classifiers correspondingly identify each type of obstacle in a cascade manner. For another example, the multiple classifiers correspondingly identify each type of obstacle in a parallel identification manner. As another example, a single classifier identifies multiple classes of obstacles.
Examples of the classifier include a trained convolutional neural network (CNN). The convolutional neural network is a deep neural network architecture closely tied to image processing; its weight-sharing structure more closely resembles a biological neural network, reduces the complexity of the network model and the number of weights, and gives the network invariance to image translation, scaling, tilt, and other deformations.
An example machine-learning procedure is as follows: train the classifier with one portion of the sample images to obtain the parameters within and between the neural layers; then perform back-propagation processing with another portion of the sample images to verify the trained classifier's probability of correct classification; when that probability reaches a preset design threshold, the classifier is ready to be embedded in the mobile robot. To raise the probability of correct classification, the sample images used for training include both positive and negative samples.
In some embodiments, the machine-learning process also includes corresponding image pre-processing of the sample images, including but not limited to: cropping, compression, grayscale conversion, image filtering and/or noise filtering, and the like. Correspondingly, the processing device pre-processes the image to be recognized and feeds the pre-processed image to the classifier to obtain a recognition result; the pre-processing of the image to be recognized includes but is not limited to cropping, compression, thresholding, image filtering, noise filtering, and the like.
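As a rough illustration of the classifier embodiment, here is a minimal PyTorch sketch; the architecture, 64x64 grayscale input, and three-class setup are assumptions for illustration, not the patent's design:

```python
import torch
import torch.nn as nn

class ObstacleClassifier(nn.Module):
    """Minimal CNN sketch for obstacle typing (winding / island /
    space-separation); layer sizes are illustrative."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                    # x: (batch, 1, 64, 64)
        return self.head(self.features(x).flatten(1))

# Illustrative training step over positive/negative samples, cross-entropy.
model = ObstacleClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                          # the back-propagation pass
    optimizer.step()
    return loss.item()
```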
It is worth noting that, in some embodiments, the classifier for recognizing winding-type obstacles may be pre-stored in the storage device. In one implementation, the classifier is written to the storage device before the mobile robot is sold to the user (e.g., before the robot leaves the factory, before it is delivered to the points of sale, or before it is sold to the end user at a point of sale). In another implementation, the classifier may be updated after the mobile robot goes online and establishes a communication connection with the corresponding vendor server or application server. In yet another implementation, the classifier is stored in a server system in remote communication with the mobile robot; when image recognition is needed, the processing device sends the acquired image(s) to the server system, whose classifier performs the recognition and feeds the result back to the processing device of the mobile robot.
In some embodiments, the processing device of the mobile robot controls the navigational movement of the mobile robot based on a preset control strategy corresponding to the identified type of obstacle.
The navigation movement of the mobile robot may be controlled as follows:
Where the obtained obstacle type is the winding type: for a mobile robot that moves by driving traveling wheels, passing over, say, a winding-type vine risks the loop formed by the vine trapping the traveling wheels, or the uneven ground caused by the vine's height tipping the robot over. The processing device therefore controls the mobile robot to change its moving direction when it approaches such an obstacle.
Here, the processing device may calculate the relative positional relationship between the detected obstacle and the mobile robot, construct a virtual wall that keeps the robot from touching the obstacle, and re-plan the navigation route around that virtual wall; for example, the moving direction is changed once the robot comes within a threshold distance of the virtual wall, the threshold distance being, for example, 1-5 cm.
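The virtual-wall trigger reduces to a point-to-segment distance test. A sketch with an illustrative 3 cm threshold (within the 1-5 cm range mentioned); the wall endpoints are assumed to be distinct:

```python
import math

def should_turn(robot_xy, wall_p0, wall_p1, threshold=0.03):
    """Sketch of the virtual-wall check: trigger a direction change when
    the robot comes within a threshold distance of the virtual wall
    segment built around a winding-type obstacle."""
    px, py = robot_xy
    (x0, y0), (x1, y1) = wall_p0, wall_p1
    dx, dy = x1 - x0, y1 - y0
    # Project the robot position onto the wall segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - x0) * dx + (py - y0) * dy) /
                     (dx * dx + dy * dy)))
    nearest = (x0 + t * dx, y0 + t * dy)
    return math.hypot(px - nearest[0], py - nearest[1]) <= threshold
```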
For example, after recognizing a winding-type obstacle, the processing device in the cleaning robot sends a control instruction to the moving device, combining information such as the obstacle's size and/or position, so that the cleaning robot departs from its original route, avoids touching the obstacle, and skips cleaning the area where it lies. This ensures the cleaning robot is not entangled by the winding-type obstacle in a way that would leave it unable to move or work, or cause it to tip over.
For a mobile robot with stronger driving force or a degree of protection, whether to change the planned navigation route is determined, and the robot controlled accordingly, in combination with the contour width, occupied height, and so on of the recognized winding-type obstacle.
Where the obtained obstacle type is the island type, the mobile robot is controlled to bypass the obstacle based on the relative spatial position between the obstacle and the robot.
Where the obtained obstacle type is the space-separation type, the mobile robot is controlled, based on the relative spatial position between the obstacle and the robot, to decelerate as it approaches the obstacle until it touches it.
Using the straight line segments that meet the ground texture screening condition, the processing device can thus identify the type of obstacles on the ground accurately and effectively, can in particular improve the detection accuracy for winding-type obstacles, and can control the robot's navigation movement so that it is not hindered by the various obstacle types.
Based on the method for determining the working environment of a mobile robot shown in fig. 5, the present application further provides a control system of a mobile robot. Referring to fig. 8, which is a schematic diagram of the control system in another embodiment of the present application, the control system 40 includes: a straight line segment identification module 41, a conversion module 42, and a ground environment information determination module 43. The straight line segment identification module 41 acquires an image captured by the image capture device of the mobile robot in the working state and identifies the straight line segments in the image, the acquired image including an image of the ground in the actual physical space where the mobile robot is located; the conversion module 42 converts the identified straight line segments in the image into a ground coordinate system; and the ground environment information determination module 43 determines, from the straight line segments in the ground coordinate system that meet the ground texture screening condition, the ground environment information of the actual physical space where the mobile robot is located.
The straight line segment identifying module 41, the converting module 42, and the ground environment information determining module 43 in the control system cooperatively perform the steps S110 to S130 according to the functions of the modules described above, which are not described herein again.
The control system can determine the ground textures in the actual physical space from the images captured by the image capture device 50. Further, the mobile robot can determine the main direction, or the obstacle type of an obstacle, from the recognized ground textures, so that the robot can move along the main direction to improve the coverage of its specific work, or so that the processing device can accurately identify the obstacle type of obstacles on the ground and control the robot's navigation movement accordingly, achieving efficient and complete perception of the working environment.
Based on the method for determining the working environment of the mobile robot shown in fig. 5 in the present application, the present application also provides a mobile robot, please refer to fig. 9, which is a schematic structural diagram of the mobile robot in an embodiment of the present application, as shown in the figure, the mobile robot includes a storage device 11, a processing device 13, an image capturing device 50, and a mobile device 60.
The storage means 11 and the processing means 13 may correspond to the storage means and the processing means in the control system 10 mentioned in the foregoing fig. 5, and will not be described in detail here. The processing device 13 is connected to the image capturing device 50 and the mobile device 60 by using the interface device 12 in the control system 10.
The image capturing apparatus 50 is used for capturing images, wherein the image capturing apparatus 50 and the assembling manner thereof are the same as or similar to those described above, and will not be described in detail herein.
The moving device 60 is connected to the processing device 13 for the controlled execution of moving operations. In practical embodiments, the moving device 60 may include a traveling mechanism and a driving mechanism; the traveling mechanism may be arranged at the bottom of the mobile robot, with the driving mechanism inside its housing. The traveling mechanism may use traveling wheels. In one implementation it may include, for example, at least two universal traveling wheels that realize forward, backward, steering, rotating, and other movements. In other implementations it may, for example, combine two straight traveling wheels with at least one auxiliary steering wheel: with the auxiliary steering wheel disengaged, the two straight traveling wheels mainly drive forward and backward travel; with the auxiliary steering wheel engaged and cooperating with them, steering, rotation, and similar movements are realized. The driving mechanism may be, for example, a driving motor that drives the traveling wheels of the traveling mechanism; in a specific implementation the driving motor may be a reversible motor, and a speed-change mechanism may further be arranged between the motor and the wheel axle.
The mobile robot works as follows: the processing device 13 determines the ground textures in the actual physical space from the images captured by the image capture device 50; the robot may then determine the main direction, or the obstacle type of an obstacle, from the recognized ground textures, so that it can move along the main direction to improve the coverage of its specific work, or so that the processing device can accurately identify the obstacle type of obstacles on the ground to control the robot's navigation movement, achieving efficient and complete perception of the working environment.
In some embodiments, the mobile robot is a cleaning robot, which further includes a cleaning device (not shown) for performing cleaning operations, such as sweeping or mopping, while the robot moves.
The cleaning device includes a mopping assembly (not shown) and/or a sweeping assembly (not shown). The mopping assembly comprises a mop pad, a mop pad carrier, a spraying device, a water-feeding device, and the like, and performs mopping operations under control in the mopping mode. The sweeping assembly performs sweeping operations under control, and can comprise side brushes, a side-brush motor for controlling them, a rolling brush, and a rolling-brush motor for controlling it, located at the bottom of the housing. There may be at least two side brushes, symmetrically arranged on opposite sides of the front end of the robot's housing; they may be rotary side brushes that turn under the control of the side-brush motor. The rolling brush is located in the middle of the robot's bottom and rotates under the control of the rolling-brush motor to sweep garbage from the floor toward the collection inlet, through which it is conveyed into the dust collection assembly. The dust collection assembly can comprise a dust collection chamber arranged in the housing and a fan providing the suction that draws the garbage into the chamber. The cleaning device is not limited to these arrangements.
Where the mobile robot includes a cleaning device, one or more of the cleaning robot's side brushes, rolling brush, and fan may be controlled into a non-working state while the robot passes an obstacle along a preset moving route. For example, the processing device controls the side brushes, the rolling brush, and the fan to be inactive while the cleaning robot passes a winding-type obstacle along the preset route.
The mobile robot can determine the ground textures in the actual physical space from the images captured by the image capture device 50, and can further determine the main direction, or the obstacle type of an obstacle, from the recognized ground textures. The robot can thus move along the main direction to improve the coverage of its specific work, or the processing device can accurately identify the obstacle type of obstacles on the ground to control the robot's navigation movement, achieving efficient and complete perception of the working environment.
The present application also provides a computer-readable and writable storage medium storing at least one program that, when invoked, executes and implements at least one embodiment described above with respect to the method of determining a mobile robot working environment illustrated in fig. 5.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for enabling a mobile robot equipped with the storage medium to perform all or part of the steps of the method according to the embodiments of the present application.
In the embodiments provided herein, the computer-readable and writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described in the computer program of the methods described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may be located on a tangible, non-transitory computer-readable and/or writable storage medium. Tangible, non-transitory computer readable and writable storage media may be any available media that can be accessed by a computer.
The flowcharts and block diagrams in the figures described above of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above examples are merely illustrative of the principles and effects of the present application and are not intended to limit the present application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (18)

1. A method of determining a working environment of a mobile robot, the mobile robot including an image capture device, the method comprising:
acquiring an image shot by the image shooting device of the mobile robot in a working state and identifying a straight line segment in the image according to the acquired image; the acquired image comprises an image of the ground in the actual physical space where the mobile robot is located;
converting the identified straight line segments in the image into a ground coordinate system;
and determining the ground environment information of the mobile robot in the actual physical space according to the straight line segment which accords with the ground texture screening condition in the ground coordinate system.
2. The method of determining the working environment of a mobile robot as claimed in claim 1, wherein the step of identifying straight line segments in the image from the acquired image comprises:
the straight line segments in the image are identified from the acquired image by a straight line segment detection method.
3. The method of claim 1, wherein the image is a color image, and the step of converting the identified line segments in the image into a ground coordinate system comprises:
converting the identified straight line segment in one image into a ground coordinate system according to preset physical reference information; or
converting the identified straight line segments in multiple images into a ground coordinate system according to preset physical reference information and the poses of the mobile robot when the different images were acquired.
4. The method of claim 1, wherein the image is a depth image, and the step of transforming the identified line segments in the image into a ground coordinate system comprises:
converting the identified straight line segments in one image into a ground coordinate system according to the pixel values of the identified straight line segments and the pixel positions of the pixel values in the image; or
converting the identified straight line segments in multiple images into a ground coordinate system according to the pixel values of the identified straight line segments, the pixel positions of the pixel values in the corresponding images, and the poses of the mobile robot when the different images were acquired.
5. The method of determining a mobile robot working environment of claim 1, wherein the ground texture screening condition includes at least one of: a statistical screening condition and a preset screening condition.
6. The method of determining a mobile robot working environment of claim 5, wherein the statistical screening conditions include at least one of:
a statistical screening condition set according to the statistical results, presented by the straight line segments in the ground coordinate system, that reflect the arrangement rule of a single ground texture;
and a statistical screening condition set according to the statistical results, presented by adjacent straight line segments in the ground coordinate system, that reflect the arrangement relationships among a plurality of ground textures.
7. The method of determining a mobile robot working environment according to claim 5, wherein the preset screening conditions include at least one of:
the method comprises the steps of setting a preset screening condition according to a preset arrangement relation among a plurality of ground lines;
and the preset screening conditions are set according to the preset arrangement rule of the single ground texture.
8. The method of determining a mobile robot working environment of claim 1, further comprising:
and determining a ground image describing the ground in the acquired image based on the shooting angle of the image shooting device, so that the mobile robot can identify a straight line segment in the ground image according to the ground image.
9. The method of determining a working environment of a mobile robot according to claim 1, wherein the image is a depth image, the method further comprising:
clustering the three-dimensional point cloud data corresponding to the depth image to obtain three-dimensional point cloud data describing the ground, so that the mobile robot identifies the straight line segment in the ground image corresponding to the three-dimensional point cloud data describing the ground.
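An editorial sketch of claim 9, with a RANSAC plane fit standing in for the unspecified clustering step: the depth pixels are first back-projected to a point cloud (for instance with `depth_pixel_to_ground` above), then split into ground and non-ground points. The iteration count and the 1 cm inlier threshold are assumptions:

```python
import numpy as np

def segment_ground(points, iters=200, thresh=0.01, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                            # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return points[best], points[~best]          # ground, non-ground points
```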
10. The method of determining a working environment of a mobile robot according to claim 1, wherein the step of determining the ground environment information of the mobile robot in the actual physical space according to the straight line segment which meets the ground texture screening condition in the ground coordinate system comprises:
determining a main direction in the actual physical space according to the straight line segments meeting the ground texture screening condition, so that the mobile robot moves along the main direction; and/or
removing, from the acquired image, the image region corresponding to the straight line segments meeting the ground texture screening condition to obtain a target image, and identifying the obstacle type of an obstacle located on the ground according to the target image;
wherein the main direction is determined based on the direction angles, in the ground coordinate system, of the straight line segments meeting the ground texture screening condition.
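An editorial sketch of the claim-10 main direction: the mode of the segment direction angles, taken modulo 180 degrees, in the ground coordinate system, which the robot then adopts as its travel heading. The bin width is an assumption:

```python
import numpy as np

def main_direction(angles_deg, bin_deg=5.0):
    # Histogram the direction angles and return the centre of the fullest bin.
    bins = int(round(180.0 / bin_deg))
    hist, edges = np.histogram(np.asarray(angles_deg) % 180.0,
                               bins=bins, range=(0.0, 180.0))
    return float(edges[np.argmax(hist)] + bin_deg / 2.0)
```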
11. The method of determining a working environment of a mobile robot according to claim 10, further comprising: planning a navigation route based on the current position of the mobile robot when the main direction is determined.
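An editorial sketch of claim 11. The patent does not fix a route shape; a back-and-forth (boustrophedon) pattern aligned with the main direction is a common choice for cleaning robots, so it stands in here, and the pass count, pass length, and row spacing are assumptions:

```python
import numpy as np

def plan_route(start_xy, main_dir_deg, passes=4, length_m=3.0, spacing_m=0.25):
    d = np.array([np.cos(np.radians(main_dir_deg)),
                  np.sin(np.radians(main_dir_deg))])   # along the main direction
    n = np.array([-d[1], d[0]])                        # sideways row shift
    waypoints, p = [], np.asarray(start_xy, dtype=float)
    for i in range(passes):
        q = p + (length_m if i % 2 == 0 else -length_m) * d
        waypoints.extend([p.copy(), q.copy()])         # out on even, back on odd
        p = q + spacing_m * n
    return waypoints
```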
12. The method of determining a working environment of a mobile robot according to claim 10, wherein the step of identifying the obstacle type of an obstacle located on the ground according to the target image comprises:
identifying image features of an obstacle in the target image, and determining the obstacle type of the obstacle according to the identified image features; or
performing obstacle recognition on the target image by using a classifier trained through machine learning, to determine the obstacle type of the obstacle.
13. The method of determining a working environment of a mobile robot according to claim 10, wherein the obstacle type comprises at least one of: a winding type, an island type, and a space division type.
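An editorial sketch of the classifier branch of claim 12. Any classifier trained on the robot's obstacle categories would satisfy the claim; a MobileNetV3 backbone stands in here, with its head sized to three classes mirroring claim 13. The weights below are untrained, so a real system would load task-specific weights instead:

```python
import torch
from torchvision import transforms
from torchvision.models import mobilenet_v3_small

CLASSES = ["winding", "island", "space_division"]  # mirrors claim 13

model = mobilenet_v3_small(num_classes=len(CLASSES))
model.eval()

preprocess = transforms.Compose([transforms.ToPILImage(),
                                 transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

def classify_obstacle(target_image_bgr):
    rgb = target_image_bgr[:, :, ::-1].copy()      # BGR -> RGB
    x = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return CLASSES[int(logits.argmax(dim=1))]
```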
14. A control system of a mobile robot, characterized in that the control system comprises:
a straight line segment identification module, configured to acquire an image shot by the image shooting device of the mobile robot in a working state and identify a straight line segment in the image from the acquired image, wherein the acquired image comprises an image of the ground in the actual physical space where the mobile robot is located;
a conversion module, configured to convert the identified straight line segment in the image into a ground coordinate system;
and a ground environment information determination module, configured to determine the ground environment information of the mobile robot in the actual physical space according to the straight line segment which meets the ground texture screening condition in the ground coordinate system.
15. A control system of a mobile robot, characterized in that the control system comprises:
an interface device for acquiring an image;
a storage device for storing at least one program;
and a processing device, connected to the interface device and the storage device, for calling and executing the at least one program so as to coordinate the interface device and the storage device to execute and implement the method of determining a working environment of a mobile robot according to any one of claims 1 to 13.
16. A mobile robot, characterized by comprising:
an image shooting device for shooting an image;
a moving device for performing a moving operation under control;
a storage device for storing at least one program;
and a processing device, connected to the moving device, the storage device and the image shooting device, for calling and executing the at least one program so as to coordinate the moving device, the storage device and the image shooting device to execute and implement the method of determining a working environment of a mobile robot according to any one of claims 1 to 13.
17. The mobile robot of claim 16, wherein the mobile robot is a cleaning robot.
18. A computer-readable storage medium, characterized by storing at least one program which, when called, executes and implements the method of determining a working environment of a mobile robot according to any one of claims 1 to 13.
CN202010687725.8A 2020-07-16 2020-07-16 Method for determining working environment of mobile robot, control system and storage medium Pending CN112034837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010687725.8A CN112034837A (en) 2020-07-16 2020-07-16 Method for determining working environment of mobile robot, control system and storage medium

Publications (1)

Publication Number Publication Date
CN112034837A (en) 2020-12-04

Family

ID=73579580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010687725.8A Pending CN112034837A (en) 2020-07-16 2020-07-16 Method for determining working environment of mobile robot, control system and storage medium

Country Status (1)

Country Link
CN (1) CN112034837A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070156286A1 (en) * 2005-12-30 2007-07-05 Irobot Corporation Autonomous Mobile Robot
KR20070108815A (en) * 2006-05-08 2007-11-13 삼성전자주식회사 Cleaning robot using floor image information and control method using the same
CN105892457A (en) * 2015-02-13 2016-08-24 美国iRobot公司 Mobile Floor-Cleaning Robot With Floor-Type Detection
CN106998980A (en) * 2014-12-10 2017-08-01 伊莱克斯公司 Floor type is detected using laser sensor
CN109074084A (en) * 2017-08-02 2018-12-21 珊口(深圳)智能科技有限公司 Control method, device, system and the robot being applicable in of robot
CN110622085A (en) * 2019-08-14 2019-12-27 珊口(深圳)智能科技有限公司 Mobile robot and control method and control system thereof
CN110688936A (en) * 2019-09-24 2020-01-14 深圳市银星智能科技股份有限公司 Method, machine and storage medium for representing characteristics of environment image
CN110801180A (en) * 2018-08-03 2020-02-18 速感科技(北京)有限公司 Operation method and device of cleaning robot
CN111358359A (en) * 2018-12-26 2020-07-03 珠海市一微半导体有限公司 Line avoiding method and device for robot, chip and sweeping robot
CN111374597A (en) * 2018-12-28 2020-07-07 珠海市一微半导体有限公司 Method and device for avoiding line of cleaning robot, storage medium and cleaning robot

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711257A (en) * 2020-12-25 2021-04-27 珠海市一微半导体有限公司 Robot edge method based on single-point TOF, chip and mobile robot
CN113109835A (en) * 2021-03-16 2021-07-13 联想(北京)有限公司 Information processing method and electronic equipment
CN113109835B (en) * 2021-03-16 2023-08-18 联想(北京)有限公司 Information processing method and electronic equipment
CN113768419A (en) * 2021-09-17 2021-12-10 安克创新科技股份有限公司 Method and device for determining sweeping direction of sweeper and sweeper
CN114569004A (en) * 2022-02-22 2022-06-03 杭州萤石软件有限公司 Traveling direction adjustment method, mobile robot system, and electronic device
WO2023160305A1 (en) * 2022-02-22 2023-08-31 杭州萤石软件有限公司 Travelling direction adjusting method, mobile robot system and electronic device
CN114569004B (en) * 2022-02-22 2023-12-01 杭州萤石软件有限公司 Travel direction adjustment method, mobile robot system and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination