CN113313052B - Cliff area detection and mobile robot control method and device and mobile robot - Google Patents


Info

Publication number
CN113313052B
CN113313052B (application CN202110660296.XA)
Authority
CN
China
Prior art keywords
image
target
area
target image
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110660296.XA
Other languages
Chinese (zh)
Other versions
CN113313052A (en)
Inventor
苏辉
蒋海青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Software Co Ltd filed Critical Hangzhou Ezviz Software Co Ltd
Priority to CN202110660296.XA priority Critical patent/CN113313052B/en
Publication of CN113313052A publication Critical patent/CN113313052A/en
Application granted granted Critical
Publication of CN113313052B publication Critical patent/CN113313052B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D 1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the invention provides a cliff area detection method and device, a mobile robot control method and device, and a mobile robot, and relates to the technical field of image processing. The cliff area detection method comprises the following steps: acquiring a target image including a target active area of the mobile robot, wherein the target image is a two-dimensional image and the target active area is the active area to be reached when the mobile robot moves in its current moving direction; determining image features of the target image; and determining, based on the image features, a detection result as to whether a cliff area exists in the target active area. Compared with the prior art, the scheme provided by the embodiment of the invention can detect whether a cliff area exists in the active area about to be reached by the mobile robot, so that the mobile robot is prevented from falling into the cliff area and being damaged.

Description

Cliff area detection and mobile robot control method and device and mobile robot
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a cliff area detection and mobile robot control method and apparatus, and a mobile robot.
Background
With the continuous development of artificial intelligence technology, mobile robots are being used in more and more technical fields. Here, a mobile robot is a machine that can move and automatically perform work, such as a parcel sorting robot, a sweeping robot, or a tour guide robot.
In the working environment of mobile robots, an active area with a height difference is usually encountered. For example, when the sweeping robot cleans the edge of a stair, a height difference exists between the area where the sweeping robot is located and the step area of the stair. In this case, an active area having a height difference from an area where the mobile robot is located may be referred to as a cliff area.
Obviously, during the movement of the mobile robot, it is necessary to detect whether a cliff area exists in the active area that the mobile robot is about to reach, so that, by controlling the moving speed of the mobile robot and the like, the mobile robot can be prevented from falling into the cliff area and being damaged.
Based on this, a cliff area detection method is needed to detect whether a cliff area exists in the active area that the mobile robot is about to reach, thereby preventing the mobile robot from falling into the cliff area and being damaged.
Disclosure of Invention
The embodiment of the invention aims to provide a cliff area detection method and device, a mobile robot control method and device, and a mobile robot and storage medium, so as to detect whether a cliff area exists in an active area which is about to be reached by the mobile robot, and prevent the mobile robot from falling into the cliff area to cause damage to the mobile robot. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a cliff area detection method, where the method includes:
acquiring a target image including a target active area of the mobile robot; wherein the target image is a two-dimensional image, and the target active area is the active area to be reached when the mobile robot moves in its current moving direction;
Determining image features of the target image;
based on the image features, a detection result is determined as to whether a cliff region exists in the target activity region.
Optionally, in a specific implementation manner, the step of determining the image feature of the target image includes:
determining each target edge line in the target image;
and, for each target edge line, respectively extracting image information of a specified type from the image areas on both sides of the target edge line as a set of image features of the target image.
Optionally, in a specific implementation manner, the step of determining, based on the image feature, a detection result about whether a cliff area exists in the target activity area includes:
Judging whether at least one group of target image features exist in each group of image features; wherein the difference of the two image information included in each group of target image features meets a preset difference condition;
if so, determining that the cliff area exists in the target activity area as a detection result.
Optionally, in a specific implementation manner, the step of determining the image feature of the target image includes:
Identifying each characteristic point in the target image to obtain each image characteristic of the target image;
Wherein each feature point is a point in the target image corresponding to a point on a fixed object in the scene where the mobile robot is located.
Optionally, in a specific implementation manner, before the step of identifying each feature point in the target image, the method further includes:
determining each target edge line in the target image;
the step of identifying each feature point in the target image to obtain each image feature of the target image comprises the following steps:
and identifying each feature point on both sides of each target edge line of the target image to obtain each image feature of the target image.
Optionally, in a specific implementation manner, the step of determining, based on the image feature, a detection result about whether a cliff area exists in the target activity area includes:
acquiring a previous frame image of the target image; wherein the previous frame image is a two-dimensional image whose collection time precedes the collection time of the target image;
For each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the movement distance corresponding to the feature point under the condition that the target point matched with the feature point exists in the previous frame image; wherein the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
based on the calculated respective movement distances, a detection result as to whether or not a cliff region exists in the target activity region is determined.
Optionally, in a specific implementation manner, for each feature point, determining whether a target point matched with the feature point exists in the previous frame of image includes:
For each feature point, a window area of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and whether a target point matched with the feature point exists in the determined window area is judged.
Optionally, in a specific implementation manner, the step of determining, based on the calculated movement distances, a detection result about whether the cliff area exists in the target activity area includes:
if the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that a cliff area exists in the target activity area as a detection result; wherein the preset threshold value is matched with the current moving speed of the mobile robot; or alternatively
If the target distance is smaller than the preset threshold value, determining that a cliff area exists in the target activity area as a detection result; wherein the target distance comprises: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
Optionally, in a specific implementation manner, the step of determining each target edge line in the target image includes:
performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as each target edge line in the target image;
wherein, the first preset condition is: belongs to a straight line; the second preset condition is: is positioned in an image area of the target active area in the target image.
Optionally, in a specific implementation manner, before the step of determining, as each target edge line of the target image, each edge line of the initial edge lines that meets the first preset condition and/or the second preset condition, the method further includes:
connecting edge lines meeting a preset connection condition among the initial edge lines to obtain connected edge lines;
the step of determining each edge line satisfying the first preset condition and/or the second preset condition in each initial edge line as each target edge line in the target image includes:
and determining each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines as each target edge line in the target image.
Optionally, in a specific implementation manner, before the step of extracting edges of the target image by using a preset edge extraction manner to obtain each initial edge line in the target image, the method further includes:
Determining an image area of the target active area in the target image;
The step of extracting the edge of the target image by using a preset edge extraction mode to obtain each initial edge line in the target image comprises the following steps:
And extracting the edges of the determined image area by using a preset edge extraction mode to obtain each initial edge line in the target image.
In a second aspect, an embodiment of the present invention provides a mobile robot control method, including:
acquiring a detection result regarding whether a cliff area exists in a target active area of the mobile robot; wherein the target active area is the active area to be reached when the mobile robot moves in its current moving direction, and the detection result is determined by using any cliff area detection method provided in the first aspect;
And controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
In a third aspect, an embodiment of the present invention provides a cliff area detection apparatus, the apparatus including:
an image acquisition module, configured to acquire a target image including a target active area of the mobile robot; wherein the target image is a two-dimensional image, and the target active area is the active area to be reached when the mobile robot moves in its current moving direction;
and the image processing module is used for determining the image characteristics of the target image and determining a detection result about whether the cliff area exists in the target activity area or not based on the image characteristics.
Optionally, in a specific implementation manner, the image processing module determines an image feature of the target image, including:
determining each target edge line in the target image;
and, for each target edge line, respectively extracting image information of a specified type from the image areas on both sides of the target edge line as a set of image features of the target image.
Optionally, in a specific implementation manner, the determining, by the image processing module, a detection result regarding whether a cliff area exists in the target active area based on the image feature includes:
Judging whether at least one group of target image features exist in each group of image features; wherein the difference of the two image information included in each group of target image features meets a preset difference condition;
if so, determining that the cliff area exists in the target activity area as a detection result.
Optionally, in a specific implementation manner, the image processing module determines an image feature of the target image, including:
Identifying each characteristic point in the target image to obtain each image characteristic of the target image;
Wherein each feature point is a point in the target image corresponding to a point on a fixed object in the scene where the mobile robot is located.
Optionally, in a specific implementation manner, the image processing module is further configured to:
determining each target edge line in the target image before identifying each feature point in the target image;
the image processing module identifies each feature point in the target image to obtain each image feature of the target image, and the image processing module comprises the following steps:
and identifying each feature point on both sides of each target edge line of the target image to obtain each image feature of the target image.
Optionally, in a specific implementation manner, the determining, by the image processing module, a detection result regarding whether a cliff area exists in the target active area based on the image feature includes:
acquiring a previous frame image of the target image; wherein the previous frame image is a two-dimensional image whose collection time precedes the collection time of the target image;
For each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the movement distance corresponding to the feature point under the condition that the target point matched with the feature point exists in the previous frame image; wherein the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
based on the calculated respective movement distances, a detection result as to whether or not a cliff region exists in the target activity region is determined.
Optionally, in a specific implementation manner, the image processing module is further configured to:
For each feature point, a window area of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and whether a target point matched with the feature point exists in the determined window area is judged.
Optionally, in a specific implementation manner, the determining, by the image processing module, a detection result regarding whether the cliff area exists in the target activity area based on the calculated movement distances includes:
if the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that a cliff area exists in the target activity area as a detection result; wherein the preset threshold value is matched with the current moving speed of the mobile robot; or alternatively
If the target distance is smaller than the preset threshold value, determining that a cliff area exists in the target activity area as a detection result; wherein the target distance comprises: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
Optionally, in a specific implementation manner, the determining, by the image processing module, each target edge line in the target image includes:
performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as each target edge line in the target image;
wherein, the first preset condition is: belongs to a straight line; the second preset condition is: is positioned in an image area of the target active area in the target image.
Optionally, in a specific implementation manner, the image processing module is further configured to:
before each edge line meeting the first preset condition and/or the second preset condition among the initial edge lines is determined as each target edge line of the target image, connecting the edge lines meeting the preset connection condition among the initial edge lines to obtain each connected edge line;
the determining, by the image processing module, each edge line meeting the first preset condition and/or the second preset condition among the initial edge lines as each target edge line in the target image includes:
determining each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines as each target edge line in the target image.
Optionally, in a specific implementation manner, the image processing module is further configured to:
before the edge extraction is carried out on the target image by utilizing a preset edge extraction mode to obtain each initial edge line in the target image, determining an image area of the target active area in the target image;
The image processing module performs edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image, and the method comprises the following steps:
And extracting the edges of the determined image area by using a preset edge extraction mode to obtain each initial edge line in the target image.
In a fourth aspect, an embodiment of the present invention provides a mobile robot control device, including:
a result acquisition module, configured to acquire a detection result regarding whether a cliff area exists in a target active area of the mobile robot; wherein the target active area is the active area to be reached when the mobile robot moves in its current moving direction, and the detection result is determined by using any cliff area detection device provided in the third aspect;
And the movement control module is used for controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
In a fifth aspect, an embodiment of the present invention provides a mobile robot, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
And a processor for implementing the steps of any one of the cliff area detection methods provided in the first aspect and/or the steps of any one of the mobile robot control methods provided in the second aspect when executing the program stored in the memory.
In a sixth aspect, an embodiment of the present invention provides a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any one of the cliff area detection methods provided in the first aspect and/or the steps of any one of the mobile robot control methods provided in the second aspect.
In a seventh aspect, embodiments of the present invention provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of any one of the cliff area detection methods provided in the first aspect and/or the steps of any one of the mobile robot control methods provided in the second aspect.
The embodiment of the invention has the beneficial effects that:
As can be seen from the above, when the solution provided by the embodiment of the present invention is applied, in order to detect, during the movement of the mobile robot, whether a cliff area exists in the target active area to be reached when the mobile robot moves in its current moving direction, a target image including the target active area of the mobile robot may first be acquired, and the image features of the target image may then be determined, so that the determined image features can be used to detect whether a cliff area exists in the target active area and a detection result regarding whether a cliff area exists in the target active area can be obtained. After the detection result is obtained, the movement of the mobile robot can be controlled based on the detection result.
Based on the above, by applying the scheme provided by the embodiment of the invention, whether a cliff area exists in the target active area can be detected by using an acquired two-dimensional image including the target active area of the mobile robot, so that the movement of the mobile robot can be controlled according to the detection result, and the mobile robot is prevented from falling into the cliff area and being damaged.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other embodiments may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a first cliff area detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a second cliff area detection method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a third cliff area detection method according to an embodiment of the present invention;
FIG. 4 is a flow chart of one implementation of S102A of FIGS. 2 and 3;
FIG. 5 is a flow chart of another implementation of S102A in FIGS. 2 and 3;
fig. 6 is a schematic flow chart of a fourth cliff area detection method according to an embodiment of the present invention;
fig. 7 is a schematic flow chart of a fifth cliff area detection method according to an embodiment of the present invention;
Fig. 8 is a schematic flow chart of a mobile robot control method according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a cliff detection device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a mobile robot control device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention;
fig. 12 is a schematic diagram of an image area of a target active area in a target image according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art shall fall within the protection scope of the present application.
During the movement of the mobile robot, it is necessary to detect whether a cliff area exists in the active area that the mobile robot is about to reach, so that, by controlling the moving speed of the mobile robot and the like, the mobile robot can be prevented from falling into the cliff area and being damaged. Based on this, a cliff area detection method is needed to detect whether a cliff area exists in the active area that the mobile robot is about to reach, thereby preventing the mobile robot from falling into the cliff area and being damaged.
In order to solve the technical problems, the embodiment of the invention provides a cliff area detection method.
The method can be applied to various moving scenes where various moving robots are located, for example, when the sweeping robot cleans beside stairs, and for example, when the tour guide robot moves in an environment where steps exist. Moreover, the method can be applied to the mobile robot and can also be applied to other electronic equipment capable of communicating with the mobile robot, for example, a management server of the mobile robot, which is reasonable.
The cliff detection method provided by the embodiment of the invention can comprise the following steps:
acquiring a target image including a target active area of the mobile robot; wherein the target image is a two-dimensional image, and the target active area is the active area to be reached when the mobile robot moves in its current moving direction;
Determining image features of the target image;
based on the image features, a detection result is determined as to whether a cliff region exists in the target activity region.
As can be seen from the above, when the solution provided by the embodiment of the present invention is applied, in order to detect, during the movement of the mobile robot, whether a cliff area exists in the target active area to be reached when the mobile robot moves in its current moving direction, a target image including the target active area of the mobile robot may first be acquired, and the image features of the target image may then be determined, so that the determined image features can be used to detect whether a cliff area exists in the target active area and a detection result regarding whether a cliff area exists in the target active area can be obtained. After the detection result is obtained, the movement of the mobile robot can be controlled based on the detection result.
Based on the above, by applying the scheme provided by the embodiment of the invention, whether a cliff area exists in the target active area can be detected by using an acquired two-dimensional image including the target active area of the mobile robot, so that the movement of the mobile robot can be controlled according to the detection result, and the mobile robot is prevented from falling into the cliff area and being damaged.
The following describes a cliff area detection method according to an embodiment of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a cliff area detection method according to an embodiment of the present invention, as shown in fig. 1, the method may include the following steps S101-S103.
S101: acquiring a target image including a target active area of the mobile robot;
Wherein, the target image is a two-dimensional image, and the target active area is the active area to be reached when the mobile robot moves in its current moving direction.
In order to prevent the mobile robot from falling into the cliff area and damaging the mobile robot, the mobile robot needs to detect whether the cliff area exists in the active area which is reached when the mobile robot moves in the current moving direction.
The active area is the area where the mobile robot is located during its movement. For example, when the mobile robot moves on the ground, the active area is the ground area; for another example, when the mobile robot moves on a table top, the active area is the table top area.
In this way, the active area that will be reached when moving in the current movement direction can be referred to as the target active area. For example, when the mobile robot moves forward, the active area that is located in front of the mobile robot and to which the mobile robot is to move may be referred to as a target active area; for another example, when the mobile robot moves backward, an active area located behind the mobile robot and to which the mobile robot is to move may be referred to as a target active area.
In order to prevent the mobile robot from falling into the cliff area, causing damage to the mobile robot, it is necessary to detect whether the cliff area exists in the target activity area. Based on this, a target image including a target active area of the mobile robot may be acquired first.
The obtained target image is a two-dimensional image, that is, a planar image containing no depth information. For example, the target image may be a YUV image collected by an image acquisition device mounted on the mobile robot, where Y represents luminance (Luma) and U and V represent chrominance (Chroma).
Therefore, in the cliff region detection method provided by the embodiment of the invention, the cliff region detection can be performed by directly using the color information of the target image without using the depth information of the image.
In addition, the embodiment of the present invention is not limited to the specific implementation manner of step S101.
Optionally, the execution body of the cliff area detection method provided by the embodiment of the present invention is the mobile robot itself; in this case, the target image may be acquired by an image acquisition device mounted on the mobile robot.
Optionally, the execution body of the cliff area detection method provided by the embodiment of the present invention is another electronic device capable of communicating with the mobile robot; in this case, after acquiring the target image, the image acquisition device mounted on the mobile robot may transmit the acquired target image to that electronic device.
S102: determining image features of a target image;
S103: based on the image features, a detection result as to whether a cliff region exists in the target active region is determined.
After the target image is acquired, the image characteristics of the target image can be determined, and further, based on the image characteristics, a detection result as to whether or not a cliff region exists in the target active region can be determined. In this way, the detection result may be indicative of whether a cliff area exists in the target activity area, and thus, movement of the mobile robot may be controlled according to the detection result to prevent the mobile robot from falling into the cliff area, causing damage to the mobile robot.
The embodiment of the present invention is not limited to the specific implementation manner of steps S102 and S103.
It should be noted that, in order to prevent the mobile robot from falling into the cliff area and causing damage to the mobile robot during the movement of the mobile robot, the target image including the target active area of the mobile robot may be continuously acquired, so that the cliff area detection method provided by the embodiment of the present invention is cyclically executed until the mobile robot stops moving.
Wherein, optionally, when the detection result is that the cliff area exists in the target activity area, the mobile robot can be controlled to move in a decelerating manner or stop moving. Correspondingly, when the detection result is that the cliff area does not exist in the target active area, the mobile robot can be controlled to continuously move according to the current moving direction and the current moving speed, or controlled to move according to a preset moving scheme.
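By way of illustration only, the overall detection-and-control loop described above might be sketched as follows. This is a minimal, non-limiting sketch; the names acquire_frame, detect_cliff, decelerate, stop and keep_moving are hypothetical placeholders and are not interfaces defined by this disclosure.

```python
# Hypothetical sketch of the cyclic detection flow described above.
# All robot methods and detect_cliff() are placeholder names, not APIs of this disclosure.

def control_loop(robot):
    prev_frame = None
    while robot.is_moving():
        frame = robot.acquire_frame()        # 2D target image including the target active area
        cliff_found = detect_cliff(frame, prev_frame)
        if cliff_found:
            robot.decelerate()               # or robot.stop()
        else:
            robot.keep_moving()              # continue with current direction and speed
        prev_frame = frame
```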
Based on the above, by applying the scheme provided by the embodiment of the invention, whether a cliff area exists in the target active area can be detected by using an acquired two-dimensional image including the target active area of the mobile robot, so that the movement of the mobile robot can be controlled according to the detection result, and the mobile robot is prevented from falling into the cliff area and being damaged.
It will be appreciated that when there is a cliff region in the target active region, then the location of the connection between the cliff region and the non-cliff region in the target active region may be mapped to one edge line in the target image, and the image regions on both sides of the edge line correspond to different active regions in the scene in which the robot is located, i.e. the image region on one side of the edge line corresponds to the cliff region in the scene in which the robot is located, and the image region on the other side of the edge line corresponds to the non-cliff region in the scene in which the robot is located.
The image features of the image areas on the two sides of the edge can be different and have large differences because the image areas on the two sides of the edge correspond to different active areas in the scene where the robot is located. For example, the color information of the image areas on both sides of the edge is different and has a large difference, and for example, the texture information of the image areas on both sides of the edge is different and has a large difference, and the like.
For example, suppose the mobile robot moves on a table top, and the table top and the ground on which the table stands are photographed together. The edge of the table top, whose projection onto the ground is a straight line, corresponds to one edge line in the captured image; the image area on one side of this edge line corresponds to the table top, and the image area on the other side corresponds to the ground. Further, since there is a large difference between the table top and the ground, the image information of the image areas on both sides of this edge line is different, and the difference is large.
Based on this, it is possible to detect whether or not a cliff region exists in the target active region by utilizing the difference in image information of the image regions on both sides of the edge line in the target image.
Optionally, in a specific implementation manner, as shown in fig. 2, the step S102, determining the image feature of the target image may include the following steps S102A-S102B.
S102A: determining the boundary line of each item mark in the target image;
S102B: for each target edge line, image information of a specified type in image areas on both sides of the target edge line is extracted as a set of image features of the target image.
In this specific implementation, each target edge line in the target image may first be determined.
Each target edge line can be determined by performing edge extraction on the target image using a preset edge extraction algorithm. The edge extraction algorithm may be any of various algorithms, such as the Canny edge detection algorithm or the Sobel edge detection algorithm, which is not specifically limited in the embodiment of the present invention.
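As a purely illustrative sketch of such an edge-extraction step, the Canny detector available in OpenCV could be applied to the target image; the blur kernel and the Canny thresholds below are assumed values for illustration and are not prescribed by this embodiment.

```python
import cv2

def extract_initial_edges(target_image_bgr):
    """Return a binary edge map of the target image (candidate initial edge lines)."""
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress fine texture noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)           # threshold values are illustrative assumptions
    return edges
```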
After each target edge line is determined, the image information of the specified type in the image areas on both sides of the target edge line can be extracted for each target edge line, and the two obtained pieces of image information are used as a set of image features of the target image. In this way, one set of image features of the target image is obtained for each target edge line.
The above specified type of image information may be: various kinds of image information such as color information of an image and texture information of the image can be obtained based on a two-dimensional target image.
For example, for each target edge line, RGB clustering may be performed on the image areas on both sides of the target edge line, and the obtained clustering result may be used as image information. Wherein R is an abbreviation for Red (Red), G is an abbreviation for Green (Green), and B is an abbreviation for Blue (Blue).
For another example, for each target edge line, HSV (Hue, Saturation, Value) spatial clustering may be performed on the image areas on both sides of the target edge line, and the obtained clustering result may be used as the image information. Here, H represents hue, S represents saturation, and V represents value (brightness).
In the above two examples, various clustering algorithms may be used to perform RGB clustering or HSV spatial clustering, which is not specifically limited in the embodiments of the present invention. For example, kmeans (partition-based) clustering algorithm or the like may be employed.
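As a non-limiting sketch of the clustering idea above, the pixels of an image region could be clustered in HSV space with k-means and the resulting cluster centers kept as the "image information" of that region; the choice of k, the termination criteria and the use of OpenCV's cv2.kmeans are assumptions made only for illustration.

```python
import cv2
import numpy as np

def hsv_cluster_centers(region_bgr, k=2):
    """Cluster the region's pixels in HSV space; the cluster centers serve as its image information."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    samples = hsv.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return centers                                # k x 3 array of HSV cluster centers
```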
Alternatively, in the above-described step S102B, for each target edge line, sub-image areas conforming to a specified size and a specified shape may be determined in the image areas on both sides of the target edge line, so that the image information of the determined two sub-image areas is extracted, respectively, as a set of image features of the target image.
Accordingly, in an alternative embodiment, as shown in fig. 3, the determining, based on the image features, whether the cliff area exists in the target active area in the step S103 may include the following steps S103A-S103B.
S103A: judging whether at least one group of target image features exist in each group of image features; if yes, go to step S103B;
wherein the difference of the two image information included in each group of target image features meets a preset difference condition;
S103B: and determining that the detection result is that a cliff area exists in the target activity area.
Since the image features of the image areas on both sides of an edge line may differ greatly when those areas correspond to different active areas in the scene where the robot is located, when the image information of the specified type in the image areas on both sides of a certain target edge line shows a large difference, the image areas on both sides of that target edge line may be considered to correspond to a cliff area and a non-cliff area in the target active area, respectively, and it can therefore be determined, as the detection result, that a cliff area exists in the target active area.
Optionally, when the image information of the specified type in the image areas on both sides of the target edge line is extracted by a clustering algorithm and shows a large difference, it may be determined that the image areas on both sides of the target edge line belong to different cluster labels, and thus that the target edge line is a true edge line, that is, an edge line bounding the image area corresponding to the cliff area. Accordingly, it can be determined that a cliff area exists in the target active area.
Based on this, after each set of image features of the target image is obtained, it is judged, for each set, whether the difference between the two pieces of image information included in the set meets the preset difference condition; if so, that set can be determined to be a set of target image features. Further, when at least one set of target image features exists among the sets of image features of the target image, there is at least one target edge line whose two sides show a large difference in the specified type of image information, so it can be determined that a cliff area exists in the target active area, that is, the detection result is that a cliff area exists in the target active area.
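For illustration only, the comparison described above might treat the distance between the mean values of the two regions' cluster centers as the "difference" and compare it with a preset threshold; both the distance measure and the threshold value are assumptions of this sketch, not requirements of the embodiment.

```python
import numpy as np

def cliff_suspected(info_side_a, info_side_b, diff_threshold=60.0):
    """Return True when the two pieces of image information differ enough (preset difference condition)."""
    # info_side_a / info_side_b: cluster centers extracted from the two sides of one target edge line
    diff = np.linalg.norm(info_side_a.mean(axis=0) - info_side_b.mean(axis=0))
    return diff > diff_threshold

def detect_by_edge_features(feature_groups):
    """feature_groups: list of (info_side_a, info_side_b) pairs, one pair per target edge line."""
    return any(cliff_suspected(a, b) for a, b in feature_groups)
```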
Optionally, in a specific implementation manner, as shown in fig. 4, the step S102A described above, determining the target edge line of each item in the target image may include the following steps S301 to S302.
S301: performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
s302: determining each edge line meeting the first preset condition and/or the second preset condition in each initial edge line as each item mark edge line in the target image;
the first preset condition is as follows: belongs to a straight line; the second preset condition is: is located within the image area of the target active area in the target image.
In this specific implementation manner, a preset edge extraction manner may be used to perform edge extraction on the target image, so as to obtain each initial edge line in the target image.
Various texture patterns may exist in the target active area; for example, a patterned wooden floor may be laid on the ground. Therefore, when edge extraction is performed on the target image using the preset edge extraction method, the resulting initial edge lines may include edge lines of various shapes, such as straight lines, curves, and broken lines. Since the edge line onto which the junction between the cliff area and the non-cliff area in the target active area is mapped is a straight line in the target image, the extraction of the specified type of image information can be omitted for each initial edge line that is not a straight line.
Further, the target image may also contain images of non-target active areas; for example, when the mobile robot moves forward, image areas of the active areas on the left and right sides of the mobile robot may be included in the target image. Since the initial edge lines located in the images of non-target active areas contribute nothing to detecting whether a cliff area exists in the target active area, it is unnecessary to extract the specified type of image information for those initial edge lines.
Based on the above, in order to simplify the detection process and reduce the calculation amount in the detection process, after the edge extraction is performed on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image, the obtained initial edge lines can be screened to obtain each item of target edge line in the target image.
That is, each of the obtained initial edge lines satisfying the first preset condition and/or the second preset condition may be determined as a target edge line in the target image.
Specifically, each edge line belonging to a straight line among the obtained initial edge lines may be determined as a target edge line in the target image; or each edge line located within the image area of the target active area in the target image among the obtained initial edge lines may be determined as a target edge line in the target image; or each edge line that both belongs to a straight line and is located within the image area of the target active area in the target image may be determined as a target edge line in the target image.
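As a rough sketch of this screening step, straight candidate edge lines could be obtained from the binary edge map with a probabilistic Hough transform and kept only when they lie inside the image area of the target active area; the Hough parameters and the rectangular ROI test are assumptions of this sketch.

```python
import cv2
import numpy as np

def select_target_edge_lines(edge_map, roi_rect):
    """Keep straight edge lines (first condition) lying inside the ROI (second condition)."""
    x0, y0, w, h = roi_rect
    lines = cv2.HoughLinesP(edge_map, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    target_lines = []
    for x1, y1, x2, y2 in ([] if lines is None else lines[:, 0]):
        inside = all(x0 <= x <= x0 + w and y0 <= y <= y0 + h
                     for x, y in ((x1, y1), (x2, y2)))
        if inside:
            target_lines.append((x1, y1, x2, y2))
    return target_lines
```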
In addition, optionally, in a specific implementation manner, each initial edge line obtained in the step S301 may be directly determined as the target edge line.
In many cases, due to factors such as the quality of the target image and the extraction accuracy of the preset edge extraction algorithm, the single edge line in the target image onto which the junction between different types of areas in the target active area is mapped may be extracted as a plurality of discontinuous edge lines.
That is, after edge extraction is performed on the target image using the preset edge extraction method to obtain the initial edge lines, several of the obtained initial edge lines may belong to the same edge line. Further, several of the target edge lines obtained by the screening may also belong to the same edge line.
In this case, if the specified type of image information in the image areas on both sides of each target edge line is extracted separately for each target edge line, a large amount of unnecessary calculation may be introduced.
Based on this, in an alternative to the specific implementation shown in fig. 4, in a specific implementation, as shown in fig. 5, the determining, in step S102A, of each target edge line in the target image may further include the following step S303.
S303: connecting the edge lines meeting the preset connection condition among the initial edge lines to obtain connected edge lines;
Accordingly, in this embodiment, in step S302, each edge line satisfying the first preset condition and/or the second preset condition in each initial edge line is determined as each target edge line in the target image, and the following step S302A may be included.
S302A: and determining each edge line meeting the first preset condition and/or the second preset condition in each connected edge line as each item mark edge line in the target image.
In this embodiment, after edge extraction is performed on the target image using the preset edge extraction method to obtain the initial edge lines in the target image, the edge lines meeting the preset connection condition among the initial edge lines may first be connected by connected-domain processing, so as to obtain the connected edge lines.
In this way, each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines can be determined as a target edge line in the target image.
In addition, optionally, in a specific implementation manner, each of the obtained connected edge lines in the step S303 may be directly determined as the target edge line.
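One way to realize the connected-domain processing mentioned above, shown purely as an illustrative sketch, is to morphologically close small breaks in the edge map before extracting connected components; the kernel size and the use of OpenCV's morphology and connected-components routines are assumptions of this sketch.

```python
import cv2

def connect_edge_lines(edge_map, gap=3):
    """Close small breaks so fragments of the same edge line form one connected edge line."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (gap, gap))
    closed = cv2.morphologyEx(edge_map, cv2.MORPH_CLOSE, kernel)
    num_labels, labels = cv2.connectedComponents(closed)   # label 0 is the background
    return num_labels, labels
```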
The target image may also contain images of non-target active areas; for example, when the mobile robot moves forward, image areas of the active areas on the left and right sides of the mobile robot may be included in the target image. Since the initial edge lines in the images of non-target active areas have no effect on detecting whether a cliff area exists in the target active area, it is unnecessary to perform edge detection on the images of non-target active areas.
Based on this, in another specific implementation manner, the step S102A may further include the following step A1.
Step A1: determining an image area of a target active area in a target image;
correspondingly, in this embodiment, step S301, using a preset edge extraction method, performs edge extraction on the target image to obtain each initial edge line in the target image, and may include step A2.
Step A2: and carrying out edge extraction on the determined image area by utilizing a preset edge extraction mode to obtain each initial edge line in the target image.
In this embodiment, an image area of the target active area in the target image may be first determined, and then, the determined image area may be edge-extracted by using a preset edge-extraction method, so as to obtain each initial edge line in the target image.
Wherein the image area of the target active area in the target image may be determined in a number of ways.
Optionally, the image area of the target active area in the target image is determined according to the lane line of the mobile robot, where the actual width of the lane line is the physical width of the mobile robot itself.
By way of example, assume that the actual width of the lane line is 1 m and that, in the target image, the horizontal pixel width corresponding to this actual width is 320. Then the region of the target image that adjoins the lane line, has a horizontal pixel width of 320, and has a vertical height equal to 1/3 of the vertical height of the target image may be determined as the image area of the target active area in the target image. As shown in fig. 12, fig. 12 is a target image, and the rectangular area in fig. 12 is the image area of the target active area in the target image.
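The numeric example above could be expressed roughly as follows, under the stated assumptions (a lane-line pixel width of 320 and an ROI height of one third of the image height); anchoring the ROI at the bottom center of the image is a further assumption of this sketch.

```python
def target_area_roi(image_width, image_height, lane_pixel_width=320):
    """Return (x, y, w, h) of the image area of the target active area, per the example above."""
    roi_w = lane_pixel_width
    roi_h = image_height // 3                   # one third of the image's vertical height
    roi_x = (image_width - roi_w) // 2          # assumed: centered on the robot's lane line
    roi_y = image_height - roi_h                # assumed: adjoining the lane line at the image bottom
    return roi_x, roi_y, roi_w, roi_h
```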
Optionally, after the step A2, each edge line satisfying the first preset condition and/or the second preset condition among the initial edge lines may further be determined as a target edge line in the target image.
Optionally, after the step A2, the edge lines meeting the preset connection condition among the initial edge lines may further be connected to obtain connected edge lines, and each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines is determined as a target edge line in the target image.
It can be understood that, during its movement, the mobile robot can continuously collect images of the active area where it is located. According to the optical principle that nearer objects appear larger and farther objects appear smaller, the image position coordinates of the point corresponding to a given point in a non-cliff area change relatively much between consecutively collected images, whereas the image position coordinates of the point corresponding to a given point in a cliff area, which is farther from the camera, change relatively little. Therefore, whether a cliff area exists in the target active area can be judged from how the image position coordinates of points on fixed objects in the scene where the mobile robot is located change across the consecutively collected images.
Based on this, in an optional implementation, as shown in fig. 6, the determining the image feature of the target image in step S102 may include the following step S1021.
S1021: identifying each characteristic point in the target image to obtain each image characteristic of the target image;
wherein each feature point is a point in the target image corresponding to a point on a fixed object in the scene where the mobile robot is located.
In this embodiment, since the position coordinates of the fixed object in the scene where the mobile robot is located in space are unchanged, points corresponding to points on the fixed object in the scene where the mobile robot is located in the target image may be used as feature points, so that each feature point in the target image may be identified, and each image feature of the target image may be obtained.
Optionally, the feature points may be any of various feature points that can be identified in the target image, for example SIFT (Scale-Invariant Feature Transform) corner points, ORB (Oriented FAST and Rotated BRIEF) corner points, SURF (Speeded-Up Robust Features) corner points, and the like.
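As an illustrative sketch, ORB corner points, one of the feature types listed above, could be detected with OpenCV; the maximum number of features and the conversion to grayscale are assumptions of this sketch rather than requirements of the embodiment.

```python
import cv2

def detect_feature_points(image_bgr, max_features=200):
    """Detect ORB corner points in the image; the keypoint coordinates serve as image features."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints = orb.detect(gray, None)
    return [kp.pt for kp in keypoints]          # (x, y) image position coordinates
```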
Further, based on the specific implementation manner shown in fig. 6, in an optional specific implementation manner, the determining the image feature of the target image in step S102 may further include the following step B1.
Step B1: each item in the target image is identified as a target edge line.
Accordingly, in this embodiment, identifying each feature point in the target image to obtain each image feature of the target image in step S1021 may include the following step B2.
Step B2: and identifying each characteristic point at two sides of each item mark edge line of the target image to obtain each image characteristic of the target image.
In this embodiment, in order to simplify the detection process and reduce the calculation amount in the detection process, each item of target edge line in the target image may be determined first, and then, each feature point on both sides of each item of target edge line in the target image is identified, so as to obtain each image feature of the target image.
Alternatively, for each target edge line, a sub-image area of a specified size and shape may be determined in the image area on each side of the target edge line, so that the feature points in the two determined sub-image areas are identified respectively to obtain each image feature of the target image.
The specific implementation manner of the step B1 is the same as that of the step S102A, and will not be described herein.
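A possible sketch of step B2, assuming each target edge line is available as two endpoint pixel coordinates; side_patches, the 40-pixel offset, and the 64-pixel patch size are illustrative assumptions, and the feature detector from the previous sketch could then be run inside each returned patch.

```python
import numpy as np

def side_patches(image: np.ndarray, p0, p1, offset: int = 40, size: int = 64):
    """Crop two square sub-image areas of a specified size, one on each side of
    the target edge line running from p0 to p1 (pixel coordinates (x, y))."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    mid = (p0 + p1) / 2.0
    d = p1 - p0
    n = np.array([-d[1], d[0]])                 # normal direction to the edge line
    n = n / (np.linalg.norm(n) + 1e-9)
    h, w = image.shape[:2]
    patches = []
    for sign in (+1, -1):                       # one patch per side of the line
        cx, cy = mid + sign * offset * n
        x0 = int(np.clip(cx - size // 2, 0, max(w - size, 0)))
        y0 = int(np.clip(cy - size // 2, 0, max(h - size, 0)))
        patches.append(image[y0:y0 + size, x0:x0 + size])
    return patches
```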
Accordingly, in an alternative embodiment, as shown in fig. 7, the determining, based on the image features, whether the cliff area exists in the target active area in step S103 may include the following steps S1031-S1033.
S1031: acquiring a previous frame image of a target image;
wherein the previous frame image is: a two-dimensional image whose acquisition time precedes the acquisition time of the target image.

In this embodiment, after each image feature of the target image is determined, a two-dimensional image whose acquisition time precedes the acquisition time of the target image may first be obtained as the previous frame image of the target image.
In the cliff area detection method provided by the embodiments of the present invention, the target image including the target active area of the mobile robot can be continuously acquired during the movement of the mobile robot, and the detection method is executed cyclically until the mobile robot stops moving. Therefore, in this specific implementation, once the previous frame image of the target image is acquired, it is combined with the target image to detect whether a cliff area exists in the target active area. Since the mobile robot is in a moving state between the acquisition of the previous frame image and the acquisition of the target image, the target active area contained in the previous frame image differs from the target active area contained in the target image.
Optionally, the previous frame image of the target image is: the two-dimensional image whose acquisition time precedes, and is closest to, the acquisition time of the target image.
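A minimal bookkeeping sketch for this step, assuming the detection loop simply keeps the most recently processed image; FrameBuffer is a hypothetical name and not part of the described device.

```python
class FrameBuffer:
    """Keep the most recently processed two-dimensional image so that, when a
    new target image arrives, the stored image can serve as its previous frame
    (the image whose acquisition time precedes, and is closest to, that of the
    new target image)."""

    def __init__(self):
        self._previous = None

    def previous_frame(self):
        return self._previous

    def store(self, target_image):
        # called after each detection cycle (or when the robot decelerates)
        self._previous = target_image
```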
S1032: for each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the corresponding moving distance of the feature point when the target point matched with the feature point exists in the previous frame image;
wherein, the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
for each feature point, it may be first determined whether a target point matching the feature point exists in the previous frame of the target image, that is, whether a point exists in the previous frame of the target image, where the point is the same as the point corresponding to the feature point in the scene where the mobile robot is located.
For example, if a certain feature point corresponds to a leg of a fixed table in the scene where the mobile robot is located, it can be determined whether a point corresponding to that table leg exists in the previous frame image of the target image.
In this way, for each feature point, when there is a target point matching the feature point in the previous frame image, the distance between the image position coordinates of the feature point and the image position coordinates of the matching target point is calculated as the movement distance corresponding to the feature point.
Optionally, the movement distance corresponding to each feature point may be calculated using the following formula:

D_i = √( (I_i − I_ref_i)² + (J_i − J_ref_i)² )

where D_i is the movement distance corresponding to the i-th feature point, (I_i, J_i) is the image position coordinate of the i-th feature point in the target image, and (I_ref_i, J_ref_i) is the image position coordinate of the target point matching the i-th feature point in the previous frame image.
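The formula above is the plain Euclidean distance between the two image position coordinates; a small sketch, with movement_distance as an illustrative name:

```python
import math

def movement_distance(feature_pt, matched_pt):
    """Euclidean distance between a feature point's image position coordinates
    (I_i, J_i) in the target image and the matching target point's coordinates
    (I_ref_i, J_ref_i) in the previous frame image."""
    (i, j), (i_ref, j_ref) = feature_pt, matched_pt
    return math.hypot(i - i_ref, j - j_ref)

# usage: a point that barely moves between frames suggests a distant (cliff) region
print(movement_distance((120.0, 85.0), (118.5, 84.0)))   # ≈ 1.80
```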
Further, optionally, for each feature point, the manner of determining whether or not there is a target point matching the feature point in the previous frame image may include the following step D.
Step D: for each feature point, a window region of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and it is determined whether or not there is a target point matching the feature point in the determined window region.
For each feature point, a window region of a preset size centered on the image position coordinates of the feature point may be determined in the previous frame image of the target image, so that it is judged whether or not there is a target point matching the feature point in the determined window region.
For example, for each feature point, a rectangular window region centered on the image position coordinates of the feature point and extending from −128 pixels to +128 pixels in both the horizontal and vertical directions may be determined in the previous frame image of the target image.
Wherein if the determined window area has a target point matched with the feature point, it is determined that the target point matched with the feature point exists in the previous frame of the target image; otherwise, it may be determined that the target point matching the feature point does not exist in the previous frame image of the target image.
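One possible sketch of this window-constrained matching, assuming ORB descriptors from the earlier sketch and a ±128-pixel window; match_in_window and the brute-force matcher choice are illustrative assumptions rather than the patented procedure.

```python
import cv2

def match_in_window(kp_cur, des_cur, kp_prev, des_prev, half_window: int = 128):
    """For each feature point in the current target image, look for a matching
    target point in the previous frame image, restricted to a window of preset
    size centered on the feature point's image position coordinates.

    Returns a list of (current_xy, previous_xy) pairs for accepted matches.
    """
    pairs = []
    if des_cur is None or des_prev is None:
        return pairs
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # suited to ORB descriptors
    for m in matcher.match(des_cur, des_prev):
        x, y = kp_cur[m.queryIdx].pt
        xr, yr = kp_prev[m.trainIdx].pt
        if abs(x - xr) <= half_window and abs(y - yr) <= half_window:
            pairs.append(((x, y), (xr, yr)))
    return pairs
```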
S1033: based on the calculated respective movement distances, a detection result as to whether or not a cliff region exists in the target activity region is determined.
After the above-described respective movement distances are calculated, a detection result as to whether or not a cliff area exists in the target activity area may be determined based on the calculated respective movement distances.
Optionally, if only one movement distance is calculated, the magnitude relation between the movement distance and the preset threshold may be determined, so that if the movement distance is smaller than the preset threshold, it may be determined that the cliff area exists in the target activity area as a detection result. Otherwise, it may be determined that the detection result is that the cliff area does not exist in the target active area.
Alternatively, when the detection result is that a cliff area exists in the target activity area, the mobile robot may be controlled to stop moving.

Alternatively, when the detection result is that a cliff area exists in the target activity area, the mobile robot may be controlled to move at a reduced speed. The target image may be stored when the mobile robot is controlled to move at a reduced speed; then, when a new target image is acquired, the stored target image is used as the previous frame image of the new target image to determine whether a cliff area exists in the new target active area of the mobile robot after it has decelerated.
Optionally, the step S1033 may include the following step E1.
Step E1: if the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that the detection result is that a cliff area exists in the target active area;
The preset threshold value is matched with the current moving speed of the mobile robot.
For each calculated moving distance, the magnitude relation between that moving distance and the preset threshold is judged, so as to determine the number of moving distances smaller than the preset threshold and to calculate the proportion this number represents among all calculated moving distances. Further, when this proportion is greater than the preset ratio, it may be determined that the detection result is that a cliff area exists in the target active area. Otherwise, it may be determined that the detection result is that no cliff area exists in the target active area.
When the detection result indicates that a cliff area exists in the target active area, the robot can be controlled to move at a reduced speed. Since the current moving speed of the mobile robot is then lower, the change of the image position coordinates of the corresponding points in successively acquired images, for a given point in the scene where the mobile robot is located, also becomes smaller. Therefore, in order to ensure the accuracy of the detection result, when the mobile robot is controlled to move at a reduced speed, the preset threshold can be reduced according to the current moving speed of the mobile robot after deceleration.
That is, the preset threshold is matched with the current moving speed of the mobile robot.
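A sketch of the step E1 decision rule under these assumptions; cliff_by_ratio and the example preset ratio of 0.5 are illustrative, and the threshold argument is expected to be scaled with the robot's current moving speed as described above.

```python
def cliff_by_ratio(distances, threshold: float, preset_ratio: float = 0.5) -> bool:
    """Step E1: report a cliff area when the proportion of moving distances
    below the preset threshold exceeds the preset ratio."""
    if not distances:
        return False
    small = sum(1 for d in distances if d < threshold)
    return small / len(distances) > preset_ratio
```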
Optionally, the step S1033 may include the following step E2.
Step E2: if the target distance is smaller than the preset threshold value, determining that the detection result is that a cliff area exists in the target active area; wherein, the target distance includes: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
When a plurality of moving distances have been calculated, the average value of these moving distances can be computed and used as the target distance, and the magnitude relation between the target distance and the preset threshold is then judged; if the target distance is smaller than the preset threshold, the detection result can be determined as that a cliff area exists in the target active area. Otherwise, it may be determined that the detection result is that no cliff area exists in the target active area.

When a plurality of moving distances have been calculated, the target image has a plurality of image features, each image feature being one feature point identified in the target image. A weight may be set for each feature point, for example according to the empirical knowledge of the technician or the influence of that feature point on the detection result of whether a cliff area exists in the target activity area; this weight serves as the weight of the moving distance corresponding to that feature point.

Thus, after each moving distance is calculated, the product of each moving distance and its weight can be computed to obtain the weighted value of each moving distance, and the sum of these weighted values is then taken as the target distance. Further, the magnitude relation between the target distance and the preset threshold is judged; if the target distance is smaller than the preset threshold, the detection result can be determined as that a cliff area exists in the target active area. Otherwise, it may be determined that the detection result is that no cliff area exists in the target active area.
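A corresponding sketch of the step E2 decision rule; cliff_by_target_distance is an illustrative name, and the optional weights argument lets the same helper cover both the plain-average and the weighted-sum variants described above.

```python
def cliff_by_target_distance(distances, threshold: float, weights=None) -> bool:
    """Step E2: report a cliff area when the target distance is below the
    preset threshold. The target distance is either the average of the moving
    distances or, when per-feature-point weights are supplied, the sum of the
    weighted moving distances."""
    if not distances:
        return False
    if weights is None:
        target = sum(distances) / len(distances)
    else:
        target = sum(d * w for d, w in zip(distances, weights))
    return target < threshold
```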
Corresponding to the cliff area detection method, the embodiment of the invention also provides a mobile robot control method.
The method can be applied to various movement scenarios of various mobile robots, for example when a sweeping robot cleans beside stairs, or when a tour-guide robot moves in an environment where steps exist. Moreover, the method can be applied to the mobile robot itself, and it can also be applied to other electronic equipment capable of communicating with the mobile robot, for example a management server of the mobile robot; both are reasonable.
Fig. 8 is a diagram of a mobile robot control method according to an embodiment of the present invention, as shown in fig. 8, the method may include the following steps:
S801: acquiring a detection result regarding whether a cliff area exists in a target activity area of the mobile robot;

wherein the target activity area is: the activity area to be reached when the mobile robot moves according to its current moving direction; the detection result is determined by using any cliff area detection method provided by the embodiments of the present invention.
S802: and controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
By means of any cliff area detection method provided by the embodiments of the present invention, a detection result about whether a cliff area exists in the target activity area of the mobile robot can be obtained; therefore, when the detection result indicates that a cliff area exists in the target activity area, the mobile robot can be controlled to decelerate or stop moving.
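A minimal control sketch of steps S801 and S802 under the stated behavior (decelerate or stop only when a cliff area is reported); Action, control_mobile_robot, and prefer_stop are illustrative names, not part of the claimed device.

```python
from enum import Enum

class Action(Enum):
    KEEP_MOVING = 0
    DECELERATE = 1
    STOP = 2

def control_mobile_robot(cliff_detected: bool, prefer_stop: bool = False) -> Action:
    """Given the detection result about whether a cliff area exists in the
    target activity area, decide whether to keep moving, decelerate, or stop."""
    if not cliff_detected:
        return Action.KEEP_MOVING
    return Action.STOP if prefer_stop else Action.DECELERATE
```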
Optionally, if the cliff area detection method and the mobile robot control method provided by the embodiments of the present invention are both applied to the mobile robot, the mobile robot may, after determining the detection result of whether a cliff area exists in its target activity area, directly control its own movement according to the detection result.

Optionally, if the cliff area detection method and the mobile robot control method provided by the embodiments of the present invention are both applied to other electronic devices capable of communicating with the mobile robot, the other electronic devices may, after determining the detection result of whether a cliff area exists in the target activity area of the mobile robot, send a movement control instruction to the mobile robot according to the detection result, so as to control the movement of the mobile robot.

Optionally, if the cliff area detection method provided by the embodiments of the present invention is applied to the mobile robot and the mobile robot control method is applied to other electronic devices capable of communicating with the mobile robot, the mobile robot may, after determining the detection result of whether a cliff area exists in its target activity area, send the detection result to the other electronic devices; the other electronic devices may then send a movement control instruction to the mobile robot according to the detection result, so as to control the movement of the mobile robot.

Optionally, if the cliff area detection method provided by the embodiments of the present invention is applied to other electronic devices capable of communicating with the mobile robot and the mobile robot control method is applied to the mobile robot, the other electronic devices may, after determining the detection result of whether a cliff area exists in the target activity area of the mobile robot, send the detection result to the mobile robot; the mobile robot may then directly control its own movement according to the detection result.
Corresponding to the cliff area detection method provided in the above embodiment of the present invention, the embodiment of the present invention provides a cliff area detection device.
Fig. 9 is a schematic structural diagram of a cliff area detection device according to an embodiment of the present invention, where, as shown in fig. 9, the device may include the following modules:
An image acquisition module 910 for acquiring a target image including a target active area of the mobile robot; wherein, the target image is: a two-dimensional image, the target active area being: the mobile robot moves according to the current moving direction to reach an active area;
An image processing module 920, configured to determine an image feature of the target image, and determine, based on the image feature, a detection result regarding whether a cliff region exists in the target active area.
As can be seen from the above, when the solution provided by the embodiments of the present invention is applied, in order to detect, during the movement of the mobile robot, whether a cliff area exists in the target activity area to be reached when the mobile robot moves in its current moving direction, a target image including the target activity area of the mobile robot may first be acquired; the image features of the target image may then be determined, and the determined image features are used to detect whether a cliff area exists in the target activity area, so as to obtain a detection result about whether a cliff area exists in the target activity area. Thus, after the detection result is obtained, the movement of the mobile robot can be controlled based on the detection result.
Based on the above, by applying the scheme provided by the embodiment of the invention, the detection of whether the cliff area exists in the target active area can be realized by utilizing the acquired two-dimensional image of the target active area comprising the mobile robot, so that the movement of the mobile robot can be controlled according to the detection result, and the mobile robot is prevented from falling into the cliff area to cause damage to the mobile robot.
Optionally, in a specific implementation manner, the image processing module 920 determines an image feature of the target image, including:
Determining each target edge line in the target image;

and, for each target edge line, respectively extracting image information of a specified type from the image areas on the two sides of that target edge line as a group of image features of the target image.
Optionally, in a specific implementation manner, the determining, by the image processing module 920, a detection result regarding whether the cliff area exists in the target active area based on the image feature includes:
Judging whether at least one group of target image features exist in each group of image features; wherein the difference of the two image information included in each group of target image features meets a preset difference condition;
if so, determining that the cliff area exists in the target activity area as a detection result.
Optionally, in a specific implementation manner, the image processing module 920 determines an image feature of the target image, including:
Identifying each characteristic point in the target image to obtain each image characteristic of the target image;
wherein each feature point is: a point in the target image that corresponds to a point on a fixed object in the scene where the mobile robot is located.
Optionally, in a specific implementation manner, the image processing module 920 is further configured to:
Determining each target edge line in the target image before identifying each feature point in the target image;

the image processing module 920 identifying each feature point in the target image to obtain each image feature of the target image includes:

identifying each feature point on both sides of each target edge line of the target image to obtain each image feature of the target image.
Optionally, in a specific implementation manner, the determining, by the image processing module 920, a detection result regarding whether the cliff area exists in the target active area based on the image feature includes:
acquiring a previous frame image of the target image; wherein the previous frame image is: a two-dimensional image whose acquisition time precedes the acquisition time of the target image;
For each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the movement distance corresponding to the feature point under the condition that the target point matched with the feature point exists in the previous frame image; wherein the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
based on the calculated respective movement distances, a detection result as to whether or not a cliff region exists in the target activity region is determined.
Optionally, in a specific implementation manner, the image processing module 920 is further configured to:
For each feature point, a window area of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and whether a target point matched with the feature point exists in the determined window area is judged.
Optionally, in a specific implementation manner, the determining, by the image processing module 920, a detection result regarding whether the cliff area exists in the target active area based on the calculated respective movement distances includes:
if the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that a cliff area exists in the target activity area as a detection result; wherein the preset threshold value is matched with the current moving speed of the mobile robot; or alternatively
If the target distance is smaller than the preset threshold value, determining that a cliff area exists in the target activity area as a detection result; wherein the target distance comprises: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
Optionally, in a specific implementation manner, the determining, by the image processing module 920, each target edge line in the target image includes:
performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
Determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as a target edge line in the target image;
wherein, the first preset condition is: belongs to a straight line; the second preset condition is: is positioned in an image area of the target active area in the target image.
Optionally, in a specific implementation manner, the image processing module 920 is further configured to:
Before each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines is determined as a target edge line of the target image, connecting the edge lines meeting a preset connection condition among the initial edge lines to obtain connected edge lines;

the image processing module 920 determining each edge line satisfying the first preset condition and/or the second preset condition among the initial edge lines as a target edge line in the target image includes:

determining each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines as a target edge line in the target image.
Optionally, in a specific implementation manner, the image processing module 920 is further configured to:
before the edge extraction is carried out on the target image by utilizing a preset edge extraction mode to obtain each initial edge line in the target image, determining an image area of the target active area in the target image;
The image processing module 920 performs edge extraction on the target image by using a preset edge extraction method to obtain each initial edge line in the target image, where the initial edge line includes:
And extracting the edges of the determined image area by using a preset edge extraction mode to obtain each initial edge line in the target image.
Corresponding to the mobile robot control method provided by the embodiment of the invention, the embodiment of the invention also provides a mobile robot control device.
Fig. 10 is a schematic structural diagram of a mobile robot control device according to an embodiment of the present invention; as shown in fig. 10, the device may include the following modules:
A result acquisition module 1010, configured to acquire a detection result regarding whether a cliff area exists in a target active area of the mobile robot; wherein the target activity area is: the activity area to be reached when the mobile robot moves according to the current moving direction; the detection result is determined by using any cliff area detection device provided by the embodiments of the present invention;
And the movement control module 1020 is used for controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
Corresponding to the cliff area detection method and the mobile robot control method provided in the embodiments of the present invention described above, the embodiments of the present invention also provide a mobile robot, as shown in fig. 11, including a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, where the processor 1101, the communication interface 1102 and the memory 1103 complete communication with each other through the communication bus 1104,
A memory 1103 for storing a computer program;
The processor 1101 is configured to implement the steps of any one of the cliff area detection methods provided in the embodiments of the present invention and/or the steps of a mobile robot control method provided in the embodiments of the present invention when executing a program stored in the memory 1103.
The communication bus mentioned above for the mobile robot may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the mobile robot and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In still another embodiment of the present invention, a computer readable storage medium is provided, where a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the cliff area detection methods provided in the embodiments of the present invention and/or the steps of a mobile robot control method provided in the embodiments of the present invention.
In yet another embodiment of the present invention, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform the steps of any one of the cliff area detection methods provided by the embodiments of the present invention described above, and/or the steps of a mobile robot control method provided by the embodiments of the present invention described above.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus embodiments, the mobile robot embodiment, the computer-readable storage medium embodiment, and the computer program product embodiment, the description is relatively brief since they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (18)

1. A cliff area detection method, the method comprising:
Acquiring a target image including a target active area of the mobile robot; wherein, the target image is: a two-dimensional image, which is a planar image that does not contain depth information; the target image includes: YUV images, or RGB images; the target activity area is: the mobile robot moves according to the current moving direction to reach an active area;
Determining image features of the target image; the image features are image information obtained by performing image recognition on the target image; the image information includes: color information of the image and/or texture information of the image; the image features are determined based on the recognition results of the feature points in the target image;
acquiring a previous frame image of the target image; wherein the previous frame image is: a two-dimensional image whose acquisition time precedes the acquisition time of the target image; each feature point is: a point in the target image that corresponds to a point on a fixed object in the scene where the mobile robot is located;
for each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the movement distance corresponding to the feature point under the condition that the target point matched with the feature point exists in the previous frame image; the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
if the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that a cliff area exists in the target activity area as a detection result; wherein the preset threshold value is matched with the current moving speed of the mobile robot; or alternatively
If the target distance is smaller than the preset threshold value, determining that a cliff area exists in the target activity area as a detection result; wherein the target distance comprises: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
2. The method of claim 1, wherein the step of determining the image characteristics of the target image comprises:
and identifying each characteristic point in the target image to obtain each image characteristic of the target image.
3. The method of claim 2, wherein prior to the step of identifying individual feature points in the target image, the method further comprises:
determining each target edge line in the target image;

the step of identifying each feature point in the target image to obtain each image feature of the target image comprises:

identifying each feature point on both sides of each target edge line of the target image to obtain each image feature of the target image.
4. The method according to claim 1, wherein determining, for each feature point, whether there is a target point in the previous frame image that matches the feature point, comprises:
For each feature point, a window area of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and whether a target point matched with the feature point exists in the determined window area is judged.
5. A method according to claim 3, wherein the step of determining each item target edge line in the target image comprises:
performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as a target edge line in the target image;
wherein, the first preset condition is: belongs to a straight line; the second preset condition is: is positioned in an image area of the target active area in the target image.
6. The method according to claim 5, wherein before the step of determining each of the initial edge lines satisfying a first preset condition and/or a second preset condition as each target edge line of the target image, the method further comprises:
connecting the edge lines meeting a preset connection condition among the initial edge lines to obtain connected edge lines;

the step of determining each edge line satisfying the first preset condition and/or the second preset condition among the initial edge lines as a target edge line in the target image comprises:

determining each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines as a target edge line in the target image.
7. The method of claim 5, wherein prior to the step of edge extracting the target image using a predetermined edge extraction method to obtain each initial edge line in the target image, the method further comprises:
Determining an image area of the target active area in the target image;
The step of extracting the edge of the target image by using a preset edge extraction mode to obtain each initial edge line in the target image comprises the following steps:
And extracting the edges of the determined image area by using a preset edge extraction mode to obtain each initial edge line in the target image.
8. A mobile robot control method, the method comprising:
acquiring a detection result regarding whether a cliff area exists in a target activity area of the mobile robot; wherein the target activity area is: the activity area to be reached when the mobile robot moves according to the current moving direction; the detection result is determined by the cliff area detection method according to any one of claims 1-7;
And controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
9. A cliff area detection device, the device comprising:
an image acquisition module for acquiring a target image including a target active area of the mobile robot; wherein, the target image is: a two-dimensional image, which is a planar image that does not contain depth information; the target image includes: YUV images, or RGB images; the target activity area is: the mobile robot moves according to the current moving direction to reach an active area;
An image processing module for determining image characteristics of the target image; the image features are image information obtained by performing image recognition on the target image; the image information includes: color information of the image and/or texture information of the image; the image features are determined based on the recognition results of the feature points in the target image;
acquiring a previous frame image of the target image; wherein the previous frame image is: a two-dimensional image whose acquisition time precedes the acquisition time of the target image; each feature point is: a point in the target image that corresponds to a point on a fixed object in the scene where the mobile robot is located;
for each feature point, calculating the distance between the image position coordinates of the feature point and the image position coordinates of the matched target point as the movement distance corresponding to the feature point under the condition that the target point matched with the feature point exists in the previous frame image; the characteristic point and the target point matched with the characteristic point correspond to the same point in the scene where the mobile robot is located;
If the ratio of the number of the moving distances smaller than the preset threshold value in the calculated moving distances is larger than the preset ratio, determining that a cliff area exists in the target activity area as a detection result; wherein the preset threshold value is matched with the current moving speed of the mobile robot; or if the target distance is smaller than the preset threshold value, determining that a cliff area exists in the target activity area as a detection result; wherein the target distance comprises: the calculated average value of the respective moving distances or the calculated sum of the weighted values of each moving distance.
10. The apparatus of claim 9, wherein the image processing module determining image features of the target image comprises:
and identifying each characteristic point in the target image to obtain each image characteristic of the target image.
11. The apparatus of claim 10, wherein the image processing module is further configured to:
determining each target edge line in the target image before identifying each feature point in the target image;

the image processing module identifying each feature point in the target image to obtain each image feature of the target image comprises:

identifying each feature point on both sides of each target edge line of the target image to obtain each image feature of the target image.
12. The apparatus of claim 9, wherein the image processing module is further configured to:
For each feature point, a window area of a preset size centered on the image position coordinates of the feature point is determined in the previous frame image, and whether a target point matched with the feature point exists in the determined window area is judged.
13. The apparatus of claim 11, wherein the image processing module determining each target edge line in the target image comprises:
performing edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image;
determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as a target edge line in the target image;
wherein, the first preset condition is: belongs to a straight line; the second preset condition is: is positioned in an image area of the target active area in the target image.
14. The apparatus of claim 13, wherein the image processing module is further configured to:
before each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines is determined as a target edge line of the target image, connecting the edge lines meeting the preset connection condition among the initial edge lines to obtain connected edge lines;

the image processing module determining each edge line meeting a first preset condition and/or a second preset condition among the initial edge lines as a target edge line in the target image comprises:

determining each edge line meeting the first preset condition and/or the second preset condition among the connected edge lines as a target edge line in the target image.
15. The apparatus of claim 14, wherein the image processing module is further configured to:
before the edge extraction is carried out on the target image by utilizing a preset edge extraction mode to obtain each initial edge line in the target image, determining an image area of the target active area in the target image;
The image processing module performs edge extraction on the target image by using a preset edge extraction mode to obtain each initial edge line in the target image, and the method comprises the following steps:
And extracting the edges of the determined image area by using a preset edge extraction mode to obtain each initial edge line in the target image.
16. A mobile robot control device, the device comprising:
a result acquisition module for acquiring a detection result regarding whether a cliff area exists in a target active area of the mobile robot; wherein the target activity area is: the activity area to be reached when the mobile robot moves according to the current moving direction; the detection result being determined by the cliff area detection device according to any one of claims 1 to 7;
And the movement control module is used for controlling the mobile robot to move in a decelerating way or stop moving when the detection result indicates that the cliff area exists in the target activity area.
17. The mobile robot is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-7 and/or the method steps of claim 8 when executing a program stored on a memory.
18. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any one of claims 1-7 and/or the method steps of claim 8.
CN202110660296.XA 2021-06-15 2021-06-15 Cliff area detection and mobile robot control method and device and mobile robot Active CN113313052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110660296.XA CN113313052B (en) 2021-06-15 2021-06-15 Cliff area detection and mobile robot control method and device and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110660296.XA CN113313052B (en) 2021-06-15 2021-06-15 Cliff area detection and mobile robot control method and device and mobile robot

Publications (2)

Publication Number Publication Date
CN113313052A CN113313052A (en) 2021-08-27
CN113313052B true CN113313052B (en) 2024-05-03

Family

ID=77378970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110660296.XA Active CN113313052B (en) 2021-06-15 2021-06-15 Cliff area detection and mobile robot control method and device and mobile robot

Country Status (1)

Country Link
CN (1) CN113313052B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486811A (en) * 2021-07-08 2021-10-08 杭州萤石软件有限公司 Cliff detection method and device, electronic equipment and computer readable storage medium
CN114663316B (en) * 2022-05-17 2022-11-04 深圳市普渡科技有限公司 Method for determining edgewise path, mobile device and computer storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504684A (en) * 2014-12-03 2015-04-08 小米科技有限责任公司 Edge extraction method and device
CN105091782A (en) * 2015-05-29 2015-11-25 南京邮电大学 Multilane laser light plane calibration method based on binocular vision
CN106251332A (en) * 2016-07-17 2016-12-21 西安电子科技大学 SAR image airport target detection method based on edge feature
CN108553027A (en) * 2018-01-04 2018-09-21 深圳悉罗机器人有限公司 Mobile robot
CN110852312A (en) * 2020-01-14 2020-02-28 深圳飞科机器人有限公司 Cliff detection method, mobile robot control method, and mobile robot
CN111652897A (en) * 2020-06-10 2020-09-11 北京云迹科技有限公司 Edge positioning method and device based on robot vision

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3660492B2 (en) * 1998-01-27 2005-06-15 株式会社東芝 Object detection device
CN104299244B (en) * 2014-09-26 2017-07-25 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
CN107454969B (en) * 2016-12-19 2019-10-29 深圳前海达闼云端智能科技有限公司 Obstacle detection method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504684A (en) * 2014-12-03 2015-04-08 小米科技有限责任公司 Edge extraction method and device
CN105091782A (en) * 2015-05-29 2015-11-25 南京邮电大学 Multilane laser light plane calibration method based on binocular vision
CN106251332A (en) * 2016-07-17 2016-12-21 西安电子科技大学 SAR image airport target detection method based on edge feature
CN108553027A (en) * 2018-01-04 2018-09-21 深圳悉罗机器人有限公司 Mobile robot
CN110852312A (en) * 2020-01-14 2020-02-28 深圳飞科机器人有限公司 Cliff detection method, mobile robot control method, and mobile robot
CN111652897A (en) * 2020-06-10 2020-09-11 北京云迹科技有限公司 Edge positioning method and device based on robot vision

Also Published As

Publication number Publication date
CN113313052A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN113313052B (en) Cliff area detection and mobile robot control method and device and mobile robot
JP6710426B2 (en) Obstacle detection method and device
CN110919653B (en) Stair climbing control method and device for robot, storage medium and robot
CN109271937B (en) Sports ground marker identification method and system based on image processing
KR101595537B1 (en) Networked capture and 3d display of localized, segmented images
CN113781402A (en) Method and device for detecting chip surface scratch defects and computer equipment
US20140177915A1 (en) Method and apparatus for detecting object
CN107358245B (en) Method for detecting image collaborative salient region
CN110442120B (en) Method for controlling robot to move in different scenes, robot and terminal equipment
KR102257746B1 (en) Method for controlling robot group and system thereof
WO2020214084A1 (en) Method and system for detecting fire and smoke
EP2813973A1 (en) Method and system for processing video image
CN112597846B (en) Lane line detection method, lane line detection device, computer device, and storage medium
JP6288230B2 (en) Object division method and apparatus
CN105184771A (en) Adaptive moving target detection system and detection method
JP6817742B2 (en) Information processing device and its control method
EP3039645A1 (en) A semi automatic target initialization method based on visual saliency
US20210213615A1 (en) Method and system for performing image classification for object recognition
US10380743B2 (en) Object identifying apparatus
CN110207702B (en) Target positioning method and device
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
JP7153264B2 (en) Image analysis system, image analysis method and image analysis program
CN111695374B (en) Segmentation method, system, medium and device for zebra stripes in monitoring view angles
CN111429487A (en) Sticky foreground segmentation method and device for depth image
CN113673362A (en) Method and device for determining motion state of object, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant