CN117837987A - Control method, cleaning robot and storage medium

Info

Publication number
CN117837987A
Authority
CN
China
Prior art keywords
image
cleaning robot
camera
light source
cliff
Prior art date
Legal status
Pending
Application number
CN202410202989.8A
Other languages
Chinese (zh)
Inventor
陈悦 (Chen Yue)
徐权 (Xu Quan)
欧阳家斌 (Ouyang Jiabin)
Current Assignee
Shenzhen Huanchuang Technology Co., Ltd.
Original Assignee
Shenzhen Huanchuang Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Huanchuang Technology Co., Ltd.
Priority to CN202410202989.8A
Publication of CN117837987A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the present application relate to the technical field of cleaning robots and disclose a control method, a cleaning robot and a storage medium. The cleaning robot includes a camera and a light source; the camera is arranged on a chassis of the cleaning robot and the light source is arranged near the camera, so that the light source can illuminate the ground in the advancing direction of the cleaning robot and the image captured by the camera covers the area the light source can illuminate. The method includes: acquiring a current image shot by the camera; identifying the ground material according to the current image to obtain a recognition result; and performing cliff detection according to the current image to obtain a detection result. Finally, the cleaning robot is controlled to operate according to the recognition result and/or the detection result. In these embodiments the environment can be perceived accurately with fewer sensors, and the cleaning robot can be controlled to operate intelligently. In addition, because two sensing functions are combined in one sensor, the problem of single-function sensors is effectively solved, and the requirements of circuit integration and robot miniaturization are met.

Description

Control method, cleaning robot and storage medium
Technical Field
The embodiment of the application relates to the technical field of cleaning robots, in particular to a control method, a cleaning robot and a storage medium.
Background
Cleaning robots are an important branch of the modern robotics field. A cleaning robot is a robot capable of autonomously performing cleaning tasks in environments such as homes and large venues. Common cleaning robots include sweeping robots, mopping robots, and integrated sweeping and mopping machines.
A cleaning robot is generally provided with a plurality of sensors such as a laser radar, a cliff detection sensor, and an ultrasonic sensor. The laser radar scans the environment where the cleaning robot is located, and a map is built based on the laser scanning result; the cleaning robot then plans a path and walks according to the established map. The cliff detection sensor is used to detect sunken steps and help the cleaning robot avoid them. The ultrasonic sensor is used to detect the ground material (such as floor or carpet), helping the cleaning robot avoid the carpet or adopt different cleaning modes based on the ground material.
Therefore, most cleaning robots need to rely on a large number of sensors for autonomous cleaning, and these sensors have single functions, high cost and large occupied space, while the data processing needed to integrate them is complex.
Disclosure of Invention
In view of this, some embodiments of the present application provide a control method, a cleaning robot, and a storage medium, which can accurately implement ground material recognition and cliff detection by using an integrated vision sensor (including a camera and a light source), accurately sense an environment, and facilitate controlling the intelligent operation of the cleaning robot. In addition, the two sensors corresponding to ground material identification and cliff detection are integrated, so that the problem of single function of the sensor can be effectively solved, and the requirements of circuit integration and robot miniaturization are met.
In a first aspect, some embodiments of the present application provide a control method applied to a cleaning robot, the cleaning robot including a camera and a light source, the camera being disposed on a chassis of the cleaning robot, the light source being disposed near the camera, the method comprising:
acquiring a current image shot by a camera;
performing ground material identification according to the current image to obtain an identification result;
cliff detection is carried out according to the current image, and a detection result is obtained;
and controlling the cleaning robot to operate according to the identification result and/or the detection result.
In some embodiments, performing cliff detection according to the current image to obtain a detection result includes:
ranging according to the current image to obtain an initial height;
adjusting the power of the light source according to the initial height, wherein the power of the light source is positively correlated with the initial height;
after the power of the light source is regulated, a current image is obtained and updated, and distance measurement is carried out according to the current image, so that cliff height is obtained;
and determining a detection result according to the cliff height.
In some embodiments, the camera is a monocular camera, the current image includes a current frame image and a previous frame image, and performing ranging according to the current image to obtain the cliff height includes:
dividing the current frame image and the previous frame image into a plurality of image blocks respectively;
determining a first distance according to two image blocks corresponding to the previous and subsequent frames;
and determining the cliff height according to the plurality of first distances.
In some embodiments, determining the first distance according to the two image blocks corresponding to the previous and subsequent frames includes:
extracting corresponding target points in the two image blocks;
and determining a first distance according to the image positions of the target points in the two image blocks and the camera parameters.
In some embodiments, extracting the corresponding target point in the two image blocks includes:
identifying objects in the two image blocks and performing object matching;
if the matching object exists, taking a characteristic point of the matching object as a target point;
if no matching object exists, carrying out pixel feature recognition and pixel feature matching on the two image blocks, wherein the pixel features comprise corner features or contour features;
and if the matched pixel points exist, taking the matched pixel points as target points.
In some embodiments, the method further comprises:
if the matching pixel points do not exist, merging the two image blocks with at least one adjacent image block around the two image blocks respectively;
and performing object matching and then pixel feature matching on the two combined image blocks to extract a target point.
In some embodiments, the method further comprises:
if all the image blocks are combined and the target point is not matched and extracted, the camera is controlled to rotate by a preset angle, and then a new current frame image and a new previous frame image are obtained so as to match and extract the corresponding target point in the current frame image and the previous frame image.
In some embodiments, the method further comprises:
if the number of rotations of the camera reaches a preset number and the target point has still not been matched and extracted, stopping cliff detection at the current position.
In some embodiments, the light source has at least two preset light shapes, the method further comprising:
initially setting the light shape of the light source to a first shape by default;
and if the confidence of the material recognition result is less than or equal to a first threshold, adjusting the light shape of the light source to a second shape, wherein the irradiation area of the second shape is larger than that of the first shape.
In some embodiments, the method further comprises:
identifying the length or width of the light shape in the image when the light shape of the light source is the first shape;
and adjusting the light shape of the light source to the second shape when the length of the light shape is less than or equal to a second threshold or the width is less than or equal to a third threshold.
In some embodiments, the method further comprises:
re-detecting the cliff when the cliff height in the detection result is abnormal;
and if the detected cliff height is still abnormal, stopping cliff detection with the current image, and acquiring and updating the current image.
In a second aspect, some embodiments of the present application provide a cleaning robot, comprising:
the device comprises a camera and a light source, wherein the camera is arranged on a chassis of the cleaning robot, and the light source is arranged near the camera;
at least one processor communicatively connected to the camera and to the light source respectively;
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the control method as in the first aspect.
In a third aspect, some embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for causing a computer device to perform the control method as in the first aspect.
The beneficial effects of the embodiments of the present application are as follows. Unlike the prior art, the control method is applied to a cleaning robot that includes a camera and a light source; the camera is arranged on a chassis of the cleaning robot and the light source is arranged near the camera, so that the light source can illuminate the ground in the advancing direction of the cleaning robot and the image captured by the camera covers the area the light source can illuminate. The method includes: acquiring a current image shot by the camera; identifying the ground material according to the current image to obtain a recognition result; and performing cliff detection according to the current image to obtain a detection result. Finally, the cleaning robot is controlled to operate according to the recognition result and/or the detection result.
In this embodiment, ground material recognition and cliff detection can be realized accurately by acquiring images with the integrated vision sensor (the camera together with the light source) and combining image recognition with visual ranging. That is, the environment can be perceived accurately with fewer sensors, and the cleaning robot can be controlled to operate intelligently. In addition, because the two sensors corresponding to ground material recognition and cliff detection are integrated into one, the problem of single-function sensors is effectively solved, and the requirements of circuit integration and robot miniaturization are met.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to be taken in a limiting sense, unless otherwise indicated.
Fig. 1 is a schematic view of an application environment of a control method applied to a cleaning robot according to some embodiments of the present application;
FIG. 2 is a schematic view of a cleaning robot in some embodiments of the present application;
FIG. 3 is a flow chart of a control method applied to a cleaning robot in some embodiments of the present application;
FIG. 4 is a schematic diagram of the principle of triangulation in some embodiments of the present application;
fig. 5 is a schematic diagram of a single frame image ranging principle in some embodiments of the present application.
Detailed Description
The present application is described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the present application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the spirit of the present application. These are all within the scope of the present application.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that, if not conflicting, the various features in the embodiments of the present application may be combined with each other, which is within the protection scope of the present application. In addition, while functional block division is performed in a device diagram and logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. Moreover, the words "first," "second," "third," and the like as used herein do not limit the data and order of execution, but merely distinguish between identical or similar items that have substantially the same function and effect.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application in this description is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, technical features described below in the various embodiments of the present application may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, fig. 1 is a schematic view of an application environment of a control method according to an embodiment of the present application. As shown in fig. 1, the cleaning robot 100 is located on the floor, which may be the floor of a living room, office, or outdoor, etc. The place where the cleaning robot 100 is located includes an object such as a base station 200, a desk, a flowerpot, a sofa, a cabinet, or a bed.
The free walking of the cleaning robot 100 is mainly accomplished by means of several modules of mapping, positioning, navigation and obstacle avoidance. The operation control of the cleaning robot 100 is an important factor affecting normal walking. It will be appreciated that the cleaning robot 100, after sensing the surrounding environment, traces out one or more paths that do not collide with environmental obstacles and achieve full area coverage in accordance with certain area cleaning sequences or planning regulations. It will be appreciated that these modules are implemented by the sensors and corresponding control programs.
In some embodiments, a laser radar and/or a visible light camera is mounted on the cleaning robot 100, where the laser radar scans the surrounding environment where the cleaning robot 100 is located to obtain a laser point cloud. The visible light camera photographs the surrounding environment where the cleaning robot 100 is located, and acquires an image. The laser radar and the visible light camera are respectively connected with the controller in a communication way, the laser point cloud and the image are respectively sent to the controller, the controller calls a program for constructing a map preset in the memory of the cleaning robot 100, and the map is constructed based on the laser point cloud and/or the image. The map construction procedure may include a procedure corresponding to a SLAM algorithm (Simultaneous Localization and Mapping, SLAM), which will not be described in detail herein. In some embodiments, the map is a grid map. The map is saved in the memory of the cleaning robot 100. When the robot moves to work, the controller calls the map as the basis of autonomous positioning, path planning and obstacle avoidance.
It is understood that the SLAM algorithm has both positioning and navigation functions. In the positioning process, the laser radar is controlled to rotate at a high speed to emit laser, the distance between the cleaning robot and the obstacle is measured, and the relative position between the cleaning robot and the obstacle is judged by combining a map, so that positioning is realized. In some embodiments, the cleaning robot 100 may be visually positioned based on a visible light camera. In the navigation process, cleaning control is carried out based on positioning and cleaning tasks, each area to be cleaned is cleaned one by one, and a full-coverage cleaning path is planned in each area to be cleaned, so that the corresponding cleaning task is completed.
The cleaning robot 100 may be configured in any suitable shape in order to achieve a specific business function operation, for example, in some embodiments, the cleaning robot 100 may be a SLAM system-based mobile robot. Among them, the cleaning robot 100 includes, but is not limited to, a sweeping robot, a dust collecting robot, a mopping robot, a washing robot, or the like.
In some embodiments, the cleaning robot 100 may include a robot body, a lidar, a controller, and a running gear. The robot body is a main body structure of the cleaning robot 100, and can be made of a corresponding shape structure and manufacturing materials (such as hard plastic or metals including aluminum and iron) according to actual needs of the cleaning robot 100, for example, the cleaning robot body is generally flat and cylindrical.
The traveling mechanism is a structural device provided on the robot body to provide the cleaning robot 100 with a moving capability. The running gear may in particular be realized by any type of moving means, such as rollers, crawler-type wheels or the like.
The laser radar is arranged on the body of the cleaning robot 100 and is used for sensing the obstacle condition of the surrounding environment of the mobile cleaning robot 100, scanning to obtain laser point cloud data and sending the laser point cloud data to the controller so that the controller can establish a map and perform walking obstacle avoidance and the like based on the laser point cloud data. In some embodiments, the lidar comprises a pulsed lidar, a continuous wave lidar, or the like.
The controller is an electronic computing core built in the robot main body and is used for executing a logic operation step to realize intelligent control of the cleaning robot. The controller is in communication with the lidar for creating a map from the laser point cloud data and planning a cleaning path for the cleaning robot. The controller is also illustratively communicatively coupled to other sensors (e.g., cliff detection sensors or ultrasonic sensors, etc.) for detecting floor material (e.g., floor or carpet, etc.) based on data signals collected by the sensors, controlling the cleaning robot to avoid carpeting or to employ different cleaning modes based on the floor material, and detecting a sinking step, controlling the cleaning robot to avoid the step.
It is to be appreciated that the controller may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. The controller may also be any conventional processor, controller, microcontroller, or state machine. A controller may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP and/or any other such configuration, or one or more of a micro-control unit (Microcontroller Unit, MCU), a Field-Programmable Gate Array (FPGA), or a System on Chip (SoC).
In some embodiments, the robot body may further include a clean water tank, a sewage tank, a cleaner box, a dust box, and the like. In this embodiment, the base station 200 includes a base, a cleaning device, a water supply device, a dust collection device, a power supply device, a detergent replenishment device, and a sewage storage device (not shown in the drawing). The base is used for parking the cleaning robot, the cleaning device is used for cleaning the cleaning robot, the water supply device is used for supplying water to the cleaning robot and/or to the cleaning device, and the dust collection device is used for collecting the dust in the dust box of the cleaning robot. The power supply device is used for charging the cleaning robot. The sewage storage device is used for collecting the sewage in the sewage tank of the cleaning robot.
It is understood that the surface of the base station 200 for approaching the cleaning robot 100 is matched with the outer shape of the cleaning robot. For example, the cleaning robot 100 is generally flat and cylindrical, and accordingly, the base station 200 is provided with an open docking cavity for accommodating the cleaning robot 100, and the docking cavity is semi-cylindrical and has an arc surface. In some embodiments, the radius of the docking cavity is 5-10 cm greater than the radius of the cleaning robot, so that the cleaning robot 100 is received in the docking cavity when the cleaning robot 100 is docked with the base station 200.
It should be noted that, according to the task to be completed, besides the above functional modules, one or more other different functional modules (such as a water storage tank, a mop, etc.) may be mounted on the main body of the cleaning robot, and cooperate with each other to perform the corresponding task.
The control methods of some cleaning robots known to the inventors of the present application rely on a large number of sensors, and these sensors have single functions, high cost, large occupied space and complex data processing and integration.
For example, when a cleaning robot performing a mopping operation travels onto a carpet or the like, problems arise such as the cleaning robot being unable to leave the carpet or the mop getting stuck on it. In some schemes, an ultrasonic sensor is added at the bottom of the cleaning robot; the principle is that sound waves hitting a carpet are absorbed more than those hitting a bare floor, so the ground material can be judged from the different echo intensities. In other schemes, the current of the bottom roller-brush motor is monitored: the resistance is greater on a carpet, so the motor current changes, and from this it can be determined whether the robot is running on a carpet.
Further, the cleaning robot is also provided with a cliff sensor for detecting a stepped portion of the floor and controlling traveling according to the detected result. Typically, cliff sensors include an infrared transmitter and an infrared receiver.
Therefore, most control methods of cleaning robots need to rely on a large number of sensors, and the sensors have single functions, high cost and large occupied space, which is not beneficial to miniaturization design.
In view of the foregoing, some embodiments of the present application provide a control method, a cleaning robot, and a storage medium, where the control method is applied to the cleaning robot, the cleaning robot includes a camera and a light source, the camera is disposed on a chassis of the cleaning robot, the light source is disposed near the camera, so that the light source can illuminate a ground in a forward direction of the cleaning robot, and an image collected by the camera includes an area that the light source can illuminate. The method comprises the following steps: acquiring a current image shot by a camera, and identifying the ground material according to the current image to obtain an identification result; cliff detection is carried out according to the current image, and a detection result is obtained. And finally, controlling the cleaning robot to operate according to the identification result and/or the detection result.
In this embodiment, ground material recognition and cliff detection can be realized accurately by acquiring images with the integrated vision sensor (the camera together with the light source) and combining image recognition with visual ranging. That is, the environment can be perceived accurately with fewer sensors, and the cleaning robot can be controlled to operate intelligently. In addition, because the two sensors corresponding to ground material recognition and cliff detection are integrated into one, the problem of single-function sensors is effectively solved, and the requirements of circuit integration and robot miniaturization are met.
Some embodiments of the present application provide a cleaning robot. Referring to fig. 2, the cleaning robot 100 includes a camera 101, a light source 102, at least one processor 103, and a memory 104, connected by a bus (one processor is taken as an example in fig. 2).
Wherein the camera is provided on a chassis (not shown) of the cleaning robot 100. Illustratively, the camera 101 is disposed on the chassis 2cm-6cm in front of a universal wheel (not shown), where front refers to the direction of travel of the cleaning robot when in operation. The angle of the camera 101 can be flexibly adjusted according to the scene, and the view is adjusted. The light source 102 is disposed near the camera so that the light source 102 can illuminate the ground below the cleaning robot or the ground in the forward direction, and the image captured by the camera 101 includes an area that the light source 102 can illuminate. In one case, the light source 102 has a light range smaller than the field of view of the camera 101. In another case, the lighting range of the light source 102 is larger than the field of view of the camera 101.
In some implementations, the camera 101 may be a visible light camera or an invisible light camera (e.g., an infrared camera). Accordingly, the light source 102 may be a visible light source or an invisible light source (e.g., an infrared light source). In some implementations, the shape of the light emitted by the light source 102 may be changed or dynamically adjusted depending on the actual scene. Illustratively, the light source has at least two preset light shapes, such as a bar, square, circle, or ring. When the light shape is a bar, the light projected on the ground by the light source forms a bar; when it is a square, the projected light forms a square whose area is larger than that of the bar; and when it is a ring, the projected light forms a ring.
It will be appreciated that the processor 103 is configured to provide computing and control capabilities to control the cleaning robot to perform any one of the control methods provided in the embodiments described below.
It is appreciated that the processor 103 may be a general purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The memory 104, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the control methods in the embodiments of the present application. The processor 103 may implement any of the control methods provided in the following embodiments by running non-transitory software programs, instructions, and modules stored in the memory 104. In particular, the memory 104 may include high-speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 104 may also include memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In some embodiments, the cleaning robot 100 may also include other sensors such as lidar, gyroscopes, odometers, magnetic field meters, accelerometers, or speedometers. It is understood that the structure illustrated in the present embodiment does not constitute a limitation of the cleaning robot 100. In some implementations, the cleaning robot 100 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
As can be appreciated from the above, the control method provided by the embodiments of the present application may be implemented by a cleaning robot, for example by one or more processors of the cleaning robot. In some embodiments, the control method provided in the embodiments of the present application may be implemented and executed by other devices having computing processing capabilities. Other devices with computing processing capabilities may be smart devices, such as servers, etc., communicatively connected to the cleaning robot.
The control method provided by the embodiment of the present application is described below in connection with exemplary applications and implementations of the cleaning robot provided by the embodiment of the present application. Referring to fig. 3, fig. 3 is a flow chart of a control method according to an embodiment of the present application. It is understood that the execution subject of the control method may be one or more processors of the cleaning robot.
As shown in fig. 3, the method S100 includes, but is not limited to, the following steps:
s10: and acquiring a current image shot by the camera.
It will be appreciated that, since the light source shines towards the floor, the camera can clearly capture visual data of the floor below the cleaning robot or of the floor in the forward direction, forming an image. In some embodiments, the camera acquires images at a certain frequency, for example 90 frames per second. While the cleaning robot walks, at least one processor in the cleaning robot controls the light source to be turned on and the camera collects images at that frequency, so that a time-ordered image sequence is collected. The image obtained at the current moment is the current image; it can be understood that the current image is continuously updated as acquisition proceeds.
The current image can reflect the ground condition of the cleaning robot under the current position, including the ground material (floor or carpet, etc.), the sinking step, or the ground object (e.g., table and chair legs, winding, shoes, garbage can, etc.), etc. It will be appreciated that when there are fewer obstacles on the floor where the cleaning robot is located, a uniform floor is also possible in the current image.
After the camera acquires the current image, the current image is sent to the processor, so that the processor can acquire the current image.
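Illustratively, the acquisition step can be organized as in the following minimal Python sketch; the OpenCV camera interface, the device index and the two-frame buffer used later for monocular ranging are assumptions made only for illustration.

```python
import collections

import cv2  # assumes an OpenCV-compatible camera interface

# Keep only the two most recent frames: frames[-1] is the current image and
# frames[-2] is the previous frame used later for monocular ranging.
frames = collections.deque(maxlen=2)

def acquire_current_image(capture) -> bool:
    """Grab one frame from the chassis camera and update the frame buffer."""
    ok, frame = capture.read()
    if ok:
        frames.append(frame)
    return ok

# Example: open the chassis camera (device index 0 is an assumption) at about 90 fps.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 90)
while acquire_current_image(cap):
    if len(frames) == 2:
        previous_image, current_image = frames
        break
```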
S20: Performing ground material recognition according to the current image to obtain a recognition result.
In some embodiments, the floor material includes the class of floors, carpeting, or cobblestone floors. Alternatively, carpeting may be further subdivided into carpeting or smooth mats, etc., and flooring may be further subdivided into tiles or wooden floors, etc. The person skilled in the art can set the type of the floor material based on the actual condition of the floor in the actual application scene and the running condition of the cleaning robot on different floor materials, and the type of the floor material is not limited. For example, in a home setting, common ground materials include floors or carpets. Considering that the cleaning robot walks on the carpet to easily catch the wheels or the suction cup ports, the cleaning robot needs to avoid the carpet, in this scenario, ground materials including floors and carpets may be set to guide the subsequent collection of training data.
The memory of the cleaning robot is internally provided with a pre-trained recognition model which is obtained by training the neural network by a person skilled in the art through collected training data. In some embodiments, the neural network may be a network such as RCNN, YOLO, or SSD.
It will be appreciated that the training data covers the class of objects that the recognition model is able to recognize. For example, data collection is performed on common objects in an application scene of the cleaning robot, and category labels are marked, such as: collecting and obtaining a plurality of images including floors, carpets, winding, shoes, pet cats, dogs, garbage cans, table legs and the like, training a neural network by adopting the plurality of images, deploying the neural network and model parameters obtained by training into a memory of the cleaning robot to obtain an identification model, and enabling a subsequent processor to conveniently call the identification model to conduct real-time reasoning and detection.
The processor invokes the recognition model, inputs the current image into the recognition model, outputs the class of each object in the current image, such as outputting the class of ground materials, etc. Thus, the ground material category can be used as the recognition result.
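Illustratively, the inference step may look like the following Python sketch; the detector interface returning (label, confidence) pairs and the label set are assumptions, since the text only specifies that a pre-trained model (e.g., RCNN, YOLO or SSD) outputs the class of each object.

```python
import numpy as np

GROUND_CLASSES = {"floor", "carpet", "cobblestone"}  # illustrative label set

def identify_ground_material(recognition_model, current_image: np.ndarray):
    """Run the pre-trained recognition model on the current image and return the
    ground-material class together with its confidence.  `recognition_model` is a
    stand-in for the model deployed in the robot's memory and is assumed to return
    an iterable of (label, confidence) pairs."""
    detections = recognition_model(current_image)
    ground = [(label, conf) for label, conf in detections if label in GROUND_CLASSES]
    if not ground:
        return None, 0.0
    return max(ground, key=lambda item: item[1])
```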
S30: cliff detection is carried out according to the current image, and a detection result is obtained.
As will be appreciated by those skilled in the art, in the field of cleaning robots, cliff detection refers to detecting a change in the height of the floor, for example detecting the height of a staircase, a sunken step or other cliff-like object, to avoid dropping or falling the cleaning robot. When the cleaning robot approaches the edge of the cliff, the cleaning robot determines the detection result by measuring the height of the floor to its base (i.e., the cliff height). For example, when the cliff height is greater than or equal to a preset height, the detection result is determined to be cliff, and when the cliff height is less than the preset height, the detection result is determined to be non-cliff. The detection result is beneficial to controlling the cleaning robot to operate, for example, when the detection result is cliff, the cleaning robot is controlled to stop immediately, and falling is prevented.
In this embodiment, cliff detection is performed based on the current image, i.e. the cliff height is determined based on the current image, and the detection result is determined. Here, the specific procedure of the image-based ranging will be described in detail below, and reference may be made to the specific implementation procedure of step S33 below.
In some embodiments, in order to reduce the influence of the light source brightness on the brightness of the current image, and hence on the ranging accuracy, step S30 specifically includes the following steps:
s31: and ranging according to the current image to obtain the initial height.
The current image is acquired before the light source power is adjusted, and at this time, the light source power may not adapt to the height from the floor to the chassis of the cleaning robot due to the change of the road condition of the floor. Therefore, the height obtained based on the current image ranging is taken as the initial height. Here, as for a specific procedure of ranging according to the current image, which will be described in detail below, reference may be made to a specific implementation procedure of step S33 below.
S32: the power of the light source is adjusted according to the initial height, wherein the power of the light source is positively correlated with the initial height.
If the detected initial height is larger, the power of the light source is increased, and if the detected initial height is smaller, the power of the light source is reduced, so that the power of the light source is matched with the initial height (distance), and higher ranging accuracy can be obtained at different distances.
In some embodiments, the power of the light source is determined from the initial height by a preset monotonically increasing function with model parameters a, b and c:

Power = f(x; a, b, c)

where Power is the power of the light source, x is the initial height, and a, b and c are model parameters.
In this embodiment, the power of the light source is positively correlated with the initial height, i.e. the power of the light source is positively correlated with the distance of the floor to the cleaning robot chassis. Along with the increase of the detected initial height, if cliffs are possibly in front of the detected initial height, the power of the light source is increased, and further fine detection is performed; as the detected initial height decreases, indicating that the front may not be cliff, the power of the light source is reduced and further fine inspection is performed. On the one hand, the error caused by insufficient power of the light source can be reduced, and on the other hand, the power of the light source is dynamically adjusted, so that the energy consumption can be saved.
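As a minimal sketch of this adjustment, the quadratic form below is only an assumed placeholder for the parameterized relation Power = f(x; a, b, c); the text states only that the power is positively correlated with the initial height.

```python
def adjust_light_power(initial_height_mm: float,
                       a: float = 0.01, b: float = 0.5, c: float = 10.0,
                       p_min: float = 5.0, p_max: float = 100.0) -> float:
    """Map the initial height x to a light-source power.
    The increasing form a*x^2 + b*x + c and the parameter values are assumptions;
    the result is clamped to the range supported by the light-source driver."""
    x = initial_height_mm
    power = a * x * x + b * x + c
    return max(p_min, min(p_max, power))
```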
S33: After the power of the light source is adjusted, acquiring and updating the current image, and performing ranging according to the updated current image to obtain the cliff height.
It will be appreciated that the camera is constantly capturing images at a certain frequency (e.g. 90 frames/second), so that after adjusting the power of the light source, a new image obtained in real time is used as the current image to update the current image. Then, ranging is performed according to the updated current image, and the height obtained at this time is taken as the cliff height. Among them, a specific procedure for ranging from the current image will be described in detail below.
In this embodiment, the initial height gives a preliminary indication of the cliff height, and the adjusted light source power is adapted to it; under the adjusted power the cliff height can therefore be detected accurately and precisely, and the problem of large cliff-height errors caused by insufficient light source power is reduced.
In some embodiments, the ground material identification may also be performed by using the current image obtained after the power adjustment, and the specific implementation process refers to step S20, which is not repeated here.
In some embodiments, the camera may be a binocular camera, and based on the ranging principle of the binocular camera, ranging may be performed based on the current image. The distance measurement principle of the binocular camera is the prior art, and is not described in detail here.
However, binocular ranging may suffer from problems such as an unstable ranging system due to non-uniform focal lengths, high hardware cost, and complex algorithm integration. Based on this, in some embodiments, the camera is a monocular camera, i.e. only one camera is needed; the structure is simple and deployment is convenient.
In monocular ranging, a distance (cliff height) is determined from two frames of images before and after. In this embodiment, the current image includes a current frame image and a previous frame image. It is understood that the current frame image refers to an image acquired at the current moment in the walking process of the cleaning robot, and the previous frame image is an image located before the current frame image in the time sequence data.
In this embodiment, the step S33 of "ranging according to the current image to obtain the cliff height" specifically includes:
s331: the current frame image and the previous frame image are divided into a plurality of image blocks, respectively.
For example, the current frame image and the previous frame image are each divided, according to their size, into H image blocks of m×n size. The two images are divided in the same way, and the image blocks at the same position are corresponding image blocks. For example, the first image block in the upper left corner of the current frame image corresponds to the first image block in the upper left corner of the previous frame image, forming two image blocks corresponding to the previous and subsequent frames.
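A minimal Python sketch of this division is given below; the grid indexing is an illustrative detail, the only requirement from the text being that both frames are divided in the same way so that blocks at the same position correspond.

```python
import numpy as np

def split_into_blocks(image: np.ndarray, m: int, n: int):
    """Split an image into a grid of m x n pixel blocks (edge blocks may be smaller).
    Blocks with the same (row, col) key in the current and previous frame are the
    corresponding image blocks used for matching."""
    height, width = image.shape[:2]
    blocks = {}
    for row, top in enumerate(range(0, height, m)):
        for col, left in enumerate(range(0, width, n)):
            blocks[(row, col)] = image[top:top + m, left:left + n]
    return blocks
```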
S332: Determining a first distance according to the two image blocks corresponding to the previous and subsequent frames.
Here, the two image blocks with respect to the preceding and following frames refer to two image blocks located at the same position in the preceding and following frames. It can be appreciated that, since the frame rate of the camera is high, the moving distance of the cleaning robot is small between the front and rear frames, and thus, the same content is included in the current frame image and the previous frame image, and the difference is small. That is, the image contents are approximately the same for two image blocks with respect to the preceding and following frames.
Since the two corresponding image blocks are highly likely to contain the same object, the first distance can be determined from the two image blocks corresponding to the previous and subsequent frames. The first distance is a distance determined based on the image block.
In some embodiments, the step S332 specifically includes:
s3321: and extracting corresponding target points in the two image blocks.
Wherein the target point is an object feature point in the world coordinate system. Because the contents of the two image blocks are similar, the corresponding target points in the two image blocks can be identified and extracted by matching. For example, if both image blocks include a table leg, the target point may be a point where the leg contacts the floor. If both image blocks include a corner, the target point may be the corner vertex. If both image blocks include a carpet edge, the target point may be a point on the carpet edge.
In some embodiments, the foregoing step S3321 specifically includes:
(1) Objects in the two image blocks are identified and object matching is performed.
The recognition model can be used for recognizing objects in the two image blocks and determining the types of the objects in the two image blocks. Wherein the recognition model has been described in detail in step S20, the description is not repeated here.
Object matching refers to determining whether the same object is included in both image blocks. For example, if the two image blocks are identified to include the table leg, the features of the table legs in the two image blocks are compared, and if the features are similar or identical, it is determined that the same table leg exists in the two image blocks, that is, a matching object exists.
(2) If the matching object exists, taking one characteristic point of the matching object as a target point.
For example, if the matching object is a leg, one characteristic point of the leg that contacts the ground is taken as the target point.
(3) And if no matching object exists, carrying out pixel feature recognition and pixel feature matching on the two image blocks.
That is, in the case where an object cannot be matched, pixel feature matching is performed. The pixel features include corner features or contour features. Pixel feature matching refers to determining whether the same pixel feature is included in both image blocks. In some embodiments, corner features in the image block may be identified using an existing corner detection algorithm, such as the Kitchen-Rosenfeld, Harris, KLT or SUSAN corner detection algorithm. In some embodiments, the Sobel or Canny algorithm or the like may be used to perform contour extraction on the image block and extract its contour features.
After identifying the pixel features (including corner features and contour features) in the two image blocks, it is determined whether the same pixel features are included in the two image blocks for pixel feature matching. If the same pixel characteristics are included, the matched pixel points exist.
(4) And if the matched pixel points exist, taking the matched pixel points as target points.
Illustratively, if the matching pixel point is a corner point, the corner point is taken as the target point.
In this embodiment, for two corresponding image blocks, object matching is performed first, and then pixel feature matching is performed, so that on one hand, the target point can be accurately extracted, which is beneficial to the accuracy of the subsequent calculation distance; on the other hand, the target point can be determined quickly, and the calculation force is saved.
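Illustratively, the fallback pixel-feature matching could be sketched as follows; ORB descriptors are only one possible realisation of the corner/contour matching described above, and the object-level matching stage is omitted here.

```python
import cv2
import numpy as np

def _to_gray(block: np.ndarray) -> np.ndarray:
    return cv2.cvtColor(block, cv2.COLOR_BGR2GRAY) if block.ndim == 3 else block

def extract_target_point(block_prev: np.ndarray, block_curr: np.ndarray):
    """Return a pair of matched image points (one per block), or None when the
    blocks contain no matchable pixel features (e.g. uniform floor)."""
    orb = cv2.ORB_create(nfeatures=100)
    kp1, des1 = orb.detectAndCompute(_to_gray(block_prev), None)
    kp2, des2 = orb.detectAndCompute(_to_gray(block_curr), None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    best = min(matches, key=lambda m: m.distance)   # keep the strongest match
    return kp1[best.queryIdx].pt, kp2[best.trainIdx].pt
```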
In some embodiments, the foregoing step S3321 further includes:
(5) If the matching pixel points do not exist, merging the two image blocks with at least one adjacent image block around the two image blocks.
It will be appreciated that if no matching object and no matching pixel point is detected in the two image blocks, both blocks may simply show a homogeneous floor, so the detection range is enlarged: the image block in the current frame image is merged with at least one (e.g., two or more) of its surrounding adjacent image blocks, and the image block in the previous frame image is merged with at least one of its surrounding adjacent image blocks in the same way, giving two merged image blocks. It will be appreciated that merging image blocks increases the detection range and facilitates the determination of the target point.
(6) And performing object matching on the two combined image blocks, and then performing pixel characteristic matching to extract a target point.
Similarly, for the two combined image blocks, according to the steps (1) - (4), object matching is performed first, and then pixel feature matching is performed, so as to extract the target point.
It will be appreciated that if the target point still cannot be extracted from the two merged image blocks, they may be further merged with at least one surrounding image block and, following steps (1)-(4), object matching is performed first and then pixel feature matching, until the target point is extracted or all the image blocks have been merged.
In this embodiment, for the plurality of image blocks, the target point is first extracted block by block, and blocks are merged for extraction only if this fails; the image content examined in each round of object feature or pixel feature identification is therefore concise and contains little interference, which helps to improve the accuracy of the target point.
In some embodiments, the method S100 further comprises:
(7) If all the image blocks are combined and the target points are not matched and extracted, controlling the camera to rotate by a preset angle, and then obtaining a new current frame image and a new previous frame image, and matching and extracting the corresponding target points in the current frame image and the previous frame image.
If merging all the image blocks still does not allow the target point to be determined, that is, the same object, corner or contour does not exist in both the current frame image and the previous frame image, the processor controls the camera to rotate by a preset angle and then capture images, obtaining a new current frame image and a new previous frame image. The preset angle may be 20 degrees, 30 degrees, 40 degrees, 50 degrees, 60 degrees, or the like.
Illustratively, the camera is controlled to rotate by 60 degrees clockwise or anticlockwise, images are acquired, and the current frame image and the previous frame image are updated. For the updated current frame image and the previous frame image, the object matching can be performed first according to the steps (1) - (6), then the pixel matching is performed, if the target point is not extracted, then the merging matching (the object matching and the pixel matching) is performed, until the target point is extracted or all the image blocks are merged.
If the target point still cannot be determined after all the image blocks are combined, step (7) may be repeated until the target point is extracted.
In this embodiment, by adjusting the angle of the camera, the view field is changed, so that the possibility that the current frame image and the previous frame image are matched with the target point is increased, the cleaning robot can find the target point at the current position, and the interference to cliff detection caused by the problem of the view angle of the camera is reduced.
In some embodiments, the method S100 further comprises:
(8) If the rotation times of the camera reach the preset times and the target point is not matched and extracted, cliff detection at the current position is stopped.
For example, the preset number of times may be 6, and if the rotation number of the camera reaches 6 times and the target point is not extracted by matching, cliff detection at the current position is stopped. Along with the movement of the cleaning robot, after the camera captures an image of the next position and updates the current image, cliff detection is performed.
In this embodiment, limiting the number of rotations reduces errors caused by occasional image defects at the current position and lets the robot switch in time to the image at the next position for detection, so that cliff detection proceeds smoothly and promptly and the risk of the cleaning robot falling off a cliff is reduced.
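A possible structure for this retry logic is sketched below; robot.capture_pair and robot.rotate_camera are hypothetical interfaces standing in for the hardware calls, and the constants are the example values mentioned above.

```python
PRESET_ANGLE_DEG = 60   # example preset rotation angle from the text
MAX_ROTATIONS = 6       # example preset number of rotations

def find_target_point_with_rotation(robot, match_fn):
    """Retry target-point extraction, rotating the camera between attempts, and
    give up after MAX_ROTATIONS rotations so that detection can move on to the
    image captured at the next position."""
    target = match_fn(*robot.capture_pair())
    rotations = 0
    while target is None and rotations < MAX_ROTATIONS:
        robot.rotate_camera(PRESET_ANGLE_DEG)
        rotations += 1
        target = match_fn(*robot.capture_pair())
    return target   # None means: stop cliff detection at the current position
```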
S3322: Determining the first distance according to the image positions of the target point in the two image blocks and the camera parameters.
As can be seen from the above, the target point is an object feature point in the world coordinate system. The target point is present in both image blocks. Therefore, the first distance can be determined according to the image positions of the target points in the two image blocks and the camera parameters by utilizing the triangle ranging principle.
Referring to fig. 4, point P is the target point, point P1 is the image position of the target point P in the previous frame image, point P2 is the image position of the target point P in the current frame image, point O is the optical center of the camera when the current frame image is captured, and point O' is the optical center of the camera when the previous frame image is captured. The distance b between points O and O' is the baseline; it is understood that the baseline is the distance by which the camera translates between the capture of the previous frame image and the current frame image.
According to the principle of triangulation, the first distance from the target point P to the baseline b can be obtained, namely the measured first distance is:

z = f × b / d

where z is the first distance, f is the focal length of the camera, b is the baseline, and d is the parallax of the same feature between the previous and current frames, d = |ul − ur|, with ul and ur being the horizontal image coordinates of the target point in the two frame images.
In this embodiment, the first distance can be accurately determined by using the principle of triangulation using the front and rear two frames of images acquired by the monocular camera.
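A minimal sketch of this calculation is given below; the focal length in pixels and the baseline (e.g. taken from the odometer as the translation between the two exposures) are assumed to be known camera parameters.

```python
def first_distance(focal_px: float, baseline: float, u_prev: float, u_curr: float):
    """Triangulate the first distance z = f * b / d, where d is the pixel
    disparity of the matched target point between the previous and current frame.
    Returns None when there is no parallax and the distance cannot be resolved."""
    d = abs(u_prev - u_curr)
    if d == 0:
        return None
    return focal_px * baseline / d
```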
It will be appreciated that, for the image blocks where the target point is present, the first distance is calculated as in step S3322 described above.
S333: the cliff height is determined based on the plurality of first distances.
It will be appreciated that, in theory, each image block yields one first distance, so H image blocks correspond to at most H first distances. In some cases an image block contains only floor of uniform material and the first distance cannot be determined; the number of first distances obtained is therefore less than or equal to H.
It will be appreciated that each first distance reflects the distance between the ground and the camera, and that these first distances are not exactly the same due to calculation errors. Thereby, the cliff height is determined from the plurality of first distances. Illustratively, the mean of the first distances is taken as the cliff height, or the mode of the first distances is taken as the cliff height.
In this way, the current frame image and the previous frame image are divided into regions, the first distances are calculated region by region, and the cliff height is determined from the plurality of first distances; on the one hand this improves real-time performance and saves recognition computing resources, and on the other hand it improves the accuracy of the cliff height.
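The aggregation step might be sketched as follows; both variants mentioned above (mean and mode) are shown, and the rounding used to form a mode over floating-point values is an illustrative choice.

```python
import statistics

def cliff_height_from_distances(first_distances, use_mode: bool = False):
    """Aggregate the per-block first distances into a single cliff height."""
    values = [d for d in first_distances if d is not None]
    if not values:
        return None
    if use_mode:
        # round to coarse bins so a mode exists for floating-point values
        return statistics.mode(round(v, 2) for v in values)
    return statistics.fmean(values)
```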
S34: Determining the detection result according to the cliff height.
Illustratively, the detection result is determined to be cliff when the cliff height is greater than or equal to a preset height, and the detection result is determined to be non-cliff when the cliff height is less than the preset height. The detection result is beneficial to controlling the cleaning robot to operate, for example, when the detection result is cliff, the cleaning robot is controlled to stop immediately, and falling is prevented.
S40: Controlling the cleaning robot to operate according to the recognition result and/or the detection result.
It will be appreciated that if the cleaning robot encounters a carpet, a cliff, or the like during travel, it needs to brake in time to reduce the risk of the wheels or suction ports getting stuck on the carpet or of falling off the cliff. Controlling the cleaning robot to run according to the recognition result and/or the detection result therefore enables the cleaning robot to operate intelligently.
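Illustratively, the resulting control decision could be organized as in the sketch below; the threshold value and the robot methods (stop, retreat, avoid_area, continue_cleaning) are hypothetical names standing in for the behaviours described above.

```python
PRESET_CLIFF_HEIGHT = 0.05   # metres, illustrative threshold

def control_robot(robot, material, cliff_height):
    """Act on the recognition result (ground material) and detection result (cliff height)."""
    if cliff_height is not None and cliff_height >= PRESET_CLIFF_HEIGHT:
        robot.stop()          # detected a cliff: stop immediately
        robot.retreat()       # and back away from the edge
        return
    if material == "carpet":
        robot.avoid_area()    # or switch to a carpet-appropriate cleaning mode
    else:
        robot.continue_cleaning()
```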
In this embodiment, ground material recognition and cliff detection are both achieved accurately by acquiring images with a single integrated vision sensor comprising the camera and the light source and combining image recognition with visual ranging. That is, the environment can be perceived accurately with fewer sensors, and the cleaning robot can be controlled to operate intelligently. In addition, integrating the two sensors that would otherwise be needed for ground material identification and cliff detection effectively solves the problem of single-function sensors and meets the requirements of circuit integration and robot miniaturization.
In some embodiments, the light source has at least two preset light shapes. The at least two preset light shapes have been described above and are not repeated here.
The method S100 further comprises:
s50: the lamp shape of the initial default light source is the first shape.
Illustratively, the first shape is a strip, so the light shape in the image captured by the camera is a long strip.
S60: and if the confidence coefficient of the recognition result is smaller than or equal to the first threshold value, adjusting the lamplight shape of the light source to a second shape, wherein the irradiation area of the second shape is larger than that of the first shape.
It can be understood that when the recognition model outputs the recognition result (the ground material class), it also outputs a corresponding confidence. In the machine learning field, the confidence reflects how reliable the predicted output is.
When the confidence of the recognition result is less than or equal to the first threshold (e.g., 80%), the recognized ground material class is not accurate enough. The light shape of the light source is therefore adjusted to a second shape with a larger irradiation area; the second shape is, for example, a rectangle or a square.
Compared with the first shape, the second shape produces a larger light area in the image, so the image contains more ground features and the ground material identification and cliff detection based on the current image are more accurate. In addition, dynamically adjusting the light shape saves energy.
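For illustration, the confidence-based switching could be sketched as below, using the example first threshold of 80% mentioned above; the shape labels are placeholder names:

```python
FIRST_SHAPE, SECOND_SHAPE = "strip", "rectangle"   # placeholder shape labels

# Sketch only: fall back to the larger-area light shape when confidence is low.
def select_light_shape(confidence: float, first_threshold: float = 0.8) -> str:
    return SECOND_SHAPE if confidence <= first_threshold else FIRST_SHAPE
```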
In some embodiments, the method S100 further comprises:
s70: and in the case that the lamplight shape of the light source is the first shape, identifying the length or the width of the lamplight shape in the current image.
S80: and adjusting the lamp light shape of the light source to the second shape when the length of the lamp light shape is smaller than or equal to the second threshold value or the width is smaller than or equal to the third threshold value.
When the light shape of the light source is the first shape, the light shape in the image is a long strip. In this embodiment the field of view of the camera is larger than the illumination range of the first-shape light source, so the image can capture the light shape. The length and width of the strip-shaped light can be determined from the image by pixel identification.
When the length of the light shape is found to be less than or equal to the second threshold, or its width less than or equal to the third threshold, it can be preliminarily determined that a cliff may be present. The light shape of the light source is then adjusted to the second shape so as to acquire a new current image and perform cliff detection. The second and third thresholds may be set according to the actual dimensions of the strip light. If the cliff height detected under the second-shape light source is greater than or equal to the preset height, the cleaning robot stops immediately and is controlled to retreat; if it is less than the preset height, the cleaning robot is controlled to continue operating and the light shape of the light source is adjusted back to the first shape.
In this embodiment, the second shape illuminates a large area, so the features within the effective field of view of the camera are distinct; the cliff height can therefore be measured more accurately, and cliff detection is more accurate. In addition, the first-shape light source is used for preliminary detection and the second-shape light source for detailed detection, so the light shape is dynamically switched between the first shape and the second shape, which saves energy.
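A minimal sketch of this coarse-then-fine strategy, assuming the length and width of the strip-shaped light have already been measured from the image by pixel identification; the parameter names and thresholds are assumptions:

```python
# Sketch only: preliminary cliff check under the first (strip) light shape.
def strip_light_truncated(strip_length_px: float, strip_width_px: float,
                          second_threshold: float, third_threshold: float) -> bool:
    """True when the strip light looks shortened or narrowed, i.e. a cliff may be ahead."""
    return strip_length_px <= second_threshold or strip_width_px <= third_threshold
```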
In some embodiments, the method S100 further comprises:
s101: when the cliff height in the detection result is abnormal, the cliff detection is performed again.
It will be appreciated that an anomaly may be indicated if the cliff height is less than the chassis height of the cleaning robot or if the cliff height is greater than the usual step height.
A cliff detection abnormality may have various causes, such as systematic errors, calculation errors caused by indistinct feature points, or distortion errors arising during operation. Therefore, at this position the processor performs ranging again based on the current image to re-detect the cliff height. In some embodiments, the light source may be lit at the currently adjusted power or at a preset fixed value.
S102: if the detected cliff height is still abnormal, stopping cliff detection by the current image, and acquiring and updating the current image.
It can be understood that, because the frame rate of the camera is high, the cleaning robot only travels a short distance while cliff detection with the current image is producing abnormal results, so travel safety can be ensured even if a cliff lies ahead. Cliff detection with that current image is therefore stopped, and the current image is acquired and updated. Performing cliff detection with the updated current image helps the detection proceed smoothly and keeps it timely and accurate.
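The retry behaviour of S101/S102 might be sketched as follows; the plausibility bounds (chassis height, usual step height) and the measure/grab_frame callables are illustrative assumptions only:

```python
# Sketch only: re-detect on the same image once, then fall back to a new image.
def plausible(height_m: float, chassis_h_m: float = 0.08, step_h_m: float = 0.20) -> bool:
    """Heights below the chassis or above a usual step are treated as abnormal (values illustrative)."""
    return chassis_h_m <= height_m <= step_h_m

def detect_cliff_with_retry(measure, grab_frame) -> float:
    frame = grab_frame()
    height = measure(frame)
    if plausible(height):
        return height
    height = measure(frame)            # S101: range again on the same current image
    if plausible(height):
        return height
    return measure(grab_frame())       # S102: stop using this image; acquire and update it
```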
To prevent a specific object (e.g., a wire) from being caught in a wheel or roller brush of the cleaning robot and trapping the robot, in some embodiments the distance to the specific object is identified based on an image acquired by the camera, so that the cleaning robot is controlled to keep a certain distance from the specific object during operation.
As shown in fig. 5, X is the length of the specific object, stored in the cleaning robot in advance; X' is the length of the specific object projected through the optical center of the camera onto the imaging plane of the sensor; f is the focal length (the distance from the optical center to the imaging plane); and Z is the object distance.
From the similar triangles of this imaging model, the object distance Z is therefore: Z = f × X / X'
in this embodiment, the length of a particular object is known so that the distance of the feature object to the camera can be obtained according to the imaging principles. Based on the distance, the cleaning robot is controlled to keep a certain distance from the specific object during operation, so that the cleaning robot is prevented from being trapped too close to the specific object.
In summary, the control method provided by the embodiments of the application is applied to a cleaning robot that comprises a camera and a light source, the camera being arranged on the chassis of the cleaning robot and the light source being arranged near the camera; the light source can therefore illuminate the ground in the advancing direction of the cleaning robot, and the image collected by the camera includes the area that the light source can illuminate. The method comprises: acquiring a current image captured by the camera; identifying the ground material according to the current image to obtain an identification result; performing cliff detection according to the current image to obtain a detection result; and finally controlling the cleaning robot to operate according to the identification result and/or the detection result.
In this embodiment, ground material recognition and cliff detection are both achieved accurately by acquiring images with a single integrated vision sensor comprising the camera and the light source and combining image recognition with visual ranging. That is, the environment can be perceived accurately with fewer sensors, and the cleaning robot can be controlled to operate intelligently. In addition, integrating the two sensors that would otherwise be needed for ground material identification and cliff detection effectively solves the problem of single-function sensors and meets the requirements of circuit integration and robot miniaturization.
The embodiment of the application also provides a computer-readable storage medium, which stores computer-executable instructions for causing an electronic device to execute the control method provided by the embodiment of the application.
In some embodiments, the storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or may be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts within a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device (including devices such as smart terminals and servers) or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The present application also provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform a control method as in the previous embodiments.
It should be noted that the above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; the technical features of the above embodiments, or of different embodiments, may also be combined under the idea of the present application, the steps may be implemented in any order, and many other variations of the different aspects of the present application exist as described above but are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A control method applied to a cleaning robot, the cleaning robot including a camera and a light source, the camera being disposed on a chassis of the cleaning robot, the light source being disposed near the camera, the method comprising:
acquiring a current image shot by the camera;
performing ground material identification according to the current image to obtain an identification result;
cliff detection is carried out according to the current image, and a detection result is obtained;
and controlling the cleaning robot to run according to the identification result and/or the detection result.
2. The method according to claim 1, wherein the step of performing cliff detection according to the current image to obtain a detection result includes:
ranging according to the current image to obtain an initial height;
adjusting the power of the light source according to the initial height, wherein the power of the light source is positively correlated with the initial height;
after the power of the light source is regulated, a current image is obtained and updated, and distance measurement is carried out according to the current image, so that the cliff height is obtained;
and determining the detection result according to the cliff height.
3. The method of claim 2, wherein the camera is a monocular camera, the current image includes a current frame image and a previous frame image, and the ranging according to the current image to obtain the cliff height comprises:
dividing a current frame image and a previous frame image into a plurality of image blocks respectively;
determining a first distance according to two image blocks corresponding to the front frame and the rear frame;
and determining the cliff height according to a plurality of first distances.
4. A method according to claim 3, wherein said determining the first distance from two image blocks corresponding to the previous and subsequent frames comprises:
extracting corresponding target points in the two image blocks;
and determining the first distance according to the image positions of the target points in the two image blocks and camera parameters.
5. The method of claim 4, wherein the extracting the corresponding target point in the two image blocks comprises:
identifying objects in the two image blocks and performing object matching;
if a matching object exists, taking a characteristic point of the matching object as the target point;
if no matching object exists, carrying out pixel feature recognition and pixel feature matching on the two image blocks, wherein the pixel features comprise corner features or contour features;
and if the matching pixel point exists, taking the matching pixel point as the target point.
6. The method of claim 5, wherein the method further comprises:
if the matching pixel points do not exist, merging the two image blocks with at least one adjacent image block around the two image blocks respectively;
and carrying out object matching on the two combined image blocks, then carrying out pixel characteristic matching, and extracting the target point.
7. The method of claim 6, wherein the method further comprises:
and if all the image blocks are combined and are not matched and extracted to obtain the target point, controlling the camera to rotate by a preset angle, and obtaining a new current frame image and a new previous frame image to be matched and extracted to obtain the corresponding target point in the current frame image and the previous frame image.
8. The method of claim 7, wherein the method further comprises:
if the rotation times of the camera reach the preset times and the target point is not matched and extracted, cliff detection at the current position is stopped.
9. The method of any one of claims 1-8, wherein the light source has at least two preset light shapes, the method further comprising:
initially defaulting the lamp light shape of the light source to a first shape;
and if the confidence coefficient of the identification result is smaller than or equal to a first threshold value, adjusting the lamplight shape of the light source to a second shape, wherein the irradiation area of the second shape is larger than that of the first shape.
10. The method according to claim 9, wherein the method further comprises:
identifying the length or width of the light shape in the current image under the condition that the light shape of the light source is the first shape;
and adjusting the lamplight shape of the light source to the second shape when the length of the lamplight shape is smaller than or equal to a second threshold value or the width of the lamplight shape is smaller than or equal to a third threshold value.
11. The method according to claim 1, wherein the method further comprises:
re-detecting the cliff under the condition that the cliff height in the detection result is abnormal;
if the detected cliff height is still abnormal, stopping cliff detection by the current image, and acquiring and updating the current image.
12. A cleaning robot, comprising:
a camera and a light source, the camera being arranged on the chassis of the cleaning robot and the light source being arranged near the camera;
at least one processor in communication with the camera and the light source, respectively;
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the control method of any one of claims 1-11.
13. A computer-readable storage medium storing computer-executable instructions for causing a computer device to perform the control method according to any one of claims 1-11.
CN202410202989.8A 2024-02-23 2024-02-23 Control method, cleaning robot and storage medium Pending CN117837987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410202989.8A CN117837987A (en) 2024-02-23 2024-02-23 Control method, cleaning robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410202989.8A CN117837987A (en) 2024-02-23 2024-02-23 Control method, cleaning robot and storage medium

Publications (1)

Publication Number Publication Date
CN117837987A true CN117837987A (en) 2024-04-09

Family

ID=90530462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410202989.8A Pending CN117837987A (en) 2024-02-23 2024-02-23 Control method, cleaning robot and storage medium

Country Status (1)

Country Link
CN (1) CN117837987A (en)

Similar Documents

Publication Publication Date Title
CN108247647B (en) Cleaning robot
AU2017228620B2 (en) Autonomous coverage robot
US20210049376A1 (en) Mobile robot, control method and control system thereof
US20180353042A1 (en) Cleaning robot and controlling method thereof
US9751210B2 (en) Systems and methods for performing occlusion detection
US20190086933A1 (en) Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems
WO2019007038A1 (en) Floor sweeping robot, floor sweeping robot system and working method thereof
EP2888603B1 (en) Robot positioning system
CN110325938B (en) Electric vacuum cleaner
CN110989630B (en) Self-moving robot control method, device, self-moving robot and storage medium
GB2570240A (en) Electric vacuum cleaner
JP2015092348A (en) Mobile human interface robot
CN110989631A (en) Self-moving robot control method, device, self-moving robot and storage medium
EP3782771A1 (en) Robot and control method therefor
CN110325089B (en) Electric vacuum cleaner
WO2022135556A1 (en) Cleaning robot and cleaning control method therefor
CN117837987A (en) Control method, cleaning robot and storage medium
WO2021259128A1 (en) Autonomous mobile device and control method therefor
CN114779777A (en) Sensor control method and device for self-moving robot, medium and robot
AU2015224421B2 (en) Autonomous coverage robot
TWI824503B (en) Self-moving device and control method thereof
CN113741417A (en) Method, robot and readable storage medium for cleaning manure leaking plates in livestock shed
CN115607052A (en) Cleaning method, device and equipment of robot and cleaning robot
KR20220121483A (en) Method of intelligently generating map and mobile robot thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination