CN113625700A - Self-walking robot control method, device, self-walking robot and storage medium

Info

Publication number: CN113625700A (application CN202010382048.9A; granted as CN113625700B)
Authority: CN (China)
Prior art keywords: area, explored, obstacle, walking, self
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 王磊 (Wang Lei), 吴震 (Wu Zhen), 谢濠键 (Xie Haojian)
Current Assignee: Beijing Stone Innovation Technology Co ltd
Original Assignee: Beijing Rockrobo Technology Co Ltd
Application filed by: Beijing Rockrobo Technology Co Ltd
Priority to: CN202010382048.9A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Provided are a self-walking robot control method, a self-walking robot control device, a self-walking robot, and a storage medium. The method comprises the following steps: selecting a position as the central point of an area to be explored, at least part of which the self-walking robot has not yet explored while executing its task; moving to the central point to search the surroundings for obstacles, recording the positions of any detected obstacles, and marking the area to be explored as an explored area; and repeating all the steps until obstacle exploration is complete for all areas involved in the task. With the invention, the self-walking robot is much less likely to collide with obstacles.

Description

Self-walking robot control method, device, self-walking robot and storage medium
Technical Field
The invention belongs to the field of intelligent robots, and particularly relates to a self-walking robot control method and device, a self-walking robot and a storage medium.
Background
With rising living standards and advancing technology, self-walking robots have found wide application. A self-walking robot can automatically carry out cleaning operations, cleaning a target area by sweeping, mopping, vacuuming and other means.
During cleaning, the self-walking robot detects in real time any obstacles that may lie in its current working path and performs corresponding obstacle avoidance operations. In some scenes, however, such as corners, the position of an obstacle is hard to spot in time. Moreover, existing self-walking robots usually detect obstacles while cleaning, so the robot may collide with an obstacle before it can react, damaging the robot or household articles and harming the user experience. In addition, existing self-walking robots tend to leave gaps during cleaning, resulting in missed coverage.
Disclosure of Invention
The invention provides a self-walking robot control method, a self-walking robot control device, a self-walking robot and a computer-readable storage medium, aiming to solve the problem that prior-art self-walking robots easily collide with obstacles.
According to a first aspect of the present invention, there is provided a self-walking robot control method, the method comprising:
selecting a position as the central point of an area to be explored, wherein at least part of the area to be explored has not yet been explored by the self-walking robot while executing the task;
moving to the central point to explore the surroundings for obstacles, recording the positions of any detected obstacles, and completing exploration of the current area to be explored; and
repeating all the above steps until all areas involved in the task have been explored.
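The three steps above form a simple loop. As a minimal sketch in Python (with hypothetical callback names; the patent does not prescribe any data structures), the control flow might look like:

```python
def run_exploration_task(select_center, explore_around, task_complete):
    """Sketch of the claimed control loop. select_center, explore_around
    and task_complete are hypothetical stand-ins for the robot's map and
    perception routines; the patent does not specify their form."""
    explored_centers = []   # centers of areas already explored
    obstacles = []          # recorded positions of detected obstacles
    while not task_complete(explored_centers):
        center = select_center(explored_centers)   # pick the next area's center
        obstacles += explore_around(center)        # move there, scan, record
        explored_centers.append(center)            # mark the area as explored
    return explored_centers, obstacles
```

The loop terminates exactly when every area involved in the task has been explored, matching the repeat-until condition of the third step.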
Further, the moving to the central point to explore for obstacles comprises:
moving to the central point and rotating in place to acquire image information within the area to be explored, and judging from the image information whether an obstacle exists in the area to be explored.
According to the method of the present invention, further, the moving to the central point and rotating in place to acquire image information within the area to be explored comprises:
moving to the central point and rotating continuously through at least one full turn to acquire the image information; or
moving to the central point and pausing after each rotation through a predetermined angle to acquire the image information.
According to the method of the present invention, further, the predetermined angle is equal to or smaller than the field angle of the device used to acquire the image information.
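Because the rotation step must not exceed the camera's field angle, adjacent captures overlap and a full turn needs at least ceil(360°/step) stops. A small illustrative helper (an assumption for illustration, not part of the patent):

```python
import math

def rotation_stops(field_angle_deg, step_deg=None):
    """Number of stop-and-capture positions for full 360-degree coverage.
    The step (the patent's predetermined angle) defaults to the field
    angle and must not exceed it, or blind sectors would remain."""
    step = field_angle_deg if step_deg is None else step_deg
    if step > field_angle_deg:
        raise ValueError("step must not exceed the field angle")
    return math.ceil(360 / step)
```

With a 90-degree field angle the robot stops four times per turn; stepping 60 degrees with the same camera yields six overlapping views.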
According to the method of the present invention, further, the method further comprises:
after moving to the central point and exploring the surroundings for obstacles, starting to walk within the explored area, traversing the explored area, and performing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal; or
after obstacle exploration has been completed for all areas involved in the task, starting to walk, traversing all those areas, and performing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal; or
if the area involved in the task comprises two or more partitions, starting to walk within each partition as soon as its exploration is complete and traversing the current partition; then, after the current partition has been explored and walked, exploring and walking the next partition, until all areas involved in the task have been traversed, and performing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal.
According to the method of the present invention, further, the method further comprises:
during task execution, marking every area that the self-walking robot has already walked as a walked area;
the walking within the explored area specifically comprises: walking that traverses at least the part of the area to be explored other than the walked area.
According to the method of the present invention, further, the selecting a location as a center point of the area to be explored specifically includes:
according to a preset area size, selecting a preset region whose geometric center lies on the boundary of the explored area and which overlaps the explored area with the smallest overlapping area, as the area to be explored, and taking the geometric center of the area to be explored as the central point.
According to the method of the present invention, further, the selecting a location as a center point of the area to be explored specifically includes:
when execution of the task begins, arbitrarily selecting a point as the central point of the area to be explored, and constructing around it a preset region of the preset area size as the area to be explored.
According to the method of the invention, further, the area to be explored comprises any one of the following shapes: circular, rectangular or polygonal.
According to the method of the present invention, further, the method further comprises:
after judging from the image information that an obstacle exists in the area to be explored, identifying the type of the obstacle;
and, while walking within the explored area, identifying and confirming the type of the obstacle again once within a certain distance of the detected obstacle.
According to a second aspect of the present invention, there is provided a self-walking robot control device comprising:
a central point selection unit, configured to select a position as the central point of an area to be explored, wherein at least part of the area to be explored has not yet been explored by the self-walking robot while executing the task;
an obstacle exploration recording unit, configured to move to the central point to explore the surroundings for obstacles, record the positions of any detected obstacles, and complete exploration of the current area to be explored; and
a repeated execution unit, configured to repeat all the above steps until all areas involved in the task have been explored.
According to the apparatus of the present invention, further, the moving to the central point to explore the surroundings for obstacles comprises:
moving to the central point and rotating in place to acquire image information within the area to be explored, and judging from the image information whether an obstacle exists in the area to be explored.
According to the apparatus of the present invention, further, the moving to the central point and rotating in place to acquire image information within the area to be explored comprises:
moving to the central point and rotating continuously through at least one full turn to acquire the image information; or
moving to the central point and pausing after each rotation through a predetermined angle to acquire the image information.
According to the apparatus of the present invention, further, the predetermined angle is equal to or smaller than a field angle of the apparatus for acquiring the image information.
According to the apparatus of the present invention, after moving to the central point and exploring the surroundings for obstacles, the robot starts to walk within the explored area to traverse the explored area; or, after obstacle exploration has been completed for all areas involved in the task, the robot starts to walk to traverse all those areas; or, if the area involved in the task comprises two or more partitions, the robot starts to walk within each partition as soon as its exploration is complete, traversing the current partition, and after the current partition has been explored and walked, explores and walks the next partition, until all areas involved in the task have been traversed. The apparatus further comprises an obstacle avoidance operation execution unit, configured to perform an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal.
The device according to the invention further comprises:
a walked area marking unit, configured to mark, during task execution, every area that the self-walking robot has already walked as a walked area;
the walking within the explored area specifically comprises: walking that traverses at least the part of the area to be explored other than the walked area.
According to the apparatus of the present invention, further, the selecting a location as a center point of the area to be explored specifically includes:
according to a preset area size, selecting a preset region whose geometric center lies on the boundary of the explored area and which overlaps the explored area with the smallest overlapping area, as the area to be explored, and taking the geometric center of the area to be explored as the central point.
According to the apparatus of the present invention, further, the selecting a location as a center point of the area to be explored specifically includes:
when execution of the task begins, arbitrarily selecting a point as the central point of the area to be explored, and constructing around it a preset region of the preset area size as the area to be explored.
According to the apparatus of the present invention, further, the area to be explored has any one of the following shapes: circular, rectangular or polygonal.
The apparatus according to the invention further comprises: a type recognition unit, configured to identify the type of an obstacle after judging from the image information that the obstacle exists in the area to be explored, and, while the robot walks within the explored area, to identify and confirm the type of the obstacle again once within a certain distance of the detected obstacle.
According to a third aspect of the present invention, there is provided a self-walking robot comprising a processor and a memory connected to the processor, the memory storing instructions that can be loaded by the processor to carry out the method described above.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium storing instructions that can be loaded by a processor to carry out the method described above.
Advantageous effects
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance, and the position of any obstacle found is recorded, so that the self-walking robot can perform an obstacle avoidance operation when it encounters the obstacle during cleaning, avoiding collisions and improving the user experience. In addition, the invention can recognize obstacles outside the current area to be explored: although such an obstacle is farther from the self-walking robot and the recognition rate is therefore lower, the obstacle can be confirmed again from the central point of the next area to be explored, which improves the overall recognition rate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated.
Fig. 1 is a schematic view of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a perspective view of an automatic cleaning device according to an embodiment of the present disclosure;
FIG. 3 is a top view of an automatic cleaning apparatus according to an embodiment of the present disclosure;
FIG. 4 is a bottom view of an automatic cleaning device according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating a method for controlling an automatic cleaning apparatus according to an embodiment of the disclosure;
fig. 6a and 6b are schematic diagrams of a method for determining a center of an area according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a control device of an automatic cleaning apparatus according to an embodiment of the disclosure;
fig. 8 is an electronic structural schematic diagram of a robot provided in the embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various elements, these elements should not be limited by those terms. The terms are used only to distinguish one element from another. For example, a first element could also be called a second element and, similarly, a second element could be called a first element, without departing from the scope of the embodiments of the present disclosure.
The disclosed embodiments provide a possible application scenario involving a self-walking robot, such as a sweeping robot, a mopping robot, a vacuum cleaner or a weeding machine. In this embodiment, as shown in fig. 1, a household sweeping robot 100 is taken as an example. While working, the sweeping robot cleans within its built-in map according to a preset or automatically planned route, detecting in real time any obstacle that may lie in its current working path and performing the corresponding obstacle avoidance operation. To find obstacles in the cleaning area in time, the robot explores a small part of the cleaning area in real time during cleaning, performs obstacle detection and cleaning there, applies the same steps to the next area once the current one is finished, and repeats this until all areas to be cleaned are done. When an obstacle is detected in the current cleaning area, it is marked so that it can be avoided when the area is subsequently cleaned. In this embodiment, the robot may be provided with a touch-sensitive display, or be controlled by a mobile terminal, to receive operation instructions input by a user.
The automatic cleaning device can be provided with various sensing devices, such as a bumper, a cliff sensor, an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope and an odometer (the specific structure of each sensor is not described in detail; any of them may be applied to the automatic cleaning device). The robot can also be provided with a wireless communication module, such as a WIFI or Bluetooth module, to connect to an intelligent terminal or a server and receive operation instructions transmitted through it.
As shown in fig. 2, the sweeping robot 100 can travel over the ground through various combinations of movements relative to the following three mutually perpendicular axes defined by the body 110: a front-back axis X, a lateral axis Y, and a central vertical axis Z. The forward driving direction along the forward-rearward axis X is denoted as "forward", and the rearward driving direction along the forward-rearward axis X is denoted as "rearward". The direction of the transverse axis Y is essentially the direction extending between the right and left wheels of the robot along the axis defined by the center points of the drive wheel modules 141.
The sweeping robot 100 may rotate about the Y-axis. The "pitch up" is when the forward portion of the sweeping robot 100 is tilted up and the rearward portion is tilted down, and the "pitch down" is when the forward portion of the sweeping robot 100 is tilted down and the rearward portion is tilted up. In addition, the robot 100 may rotate about the Z-axis. In the forward direction of the sweeping robot 100, when the sweeping robot 100 tilts to the right of the X axis, it turns to the right, and when the sweeping robot 100 tilts to the left of the X axis, it turns to the left.
As shown in fig. 3, the sweeping robot 100 includes a robot body 110, a sensing system 120, a control system, a driving system 140, a cleaning system, an energy system, and a human-computer interaction system 180.
The machine body 110 includes a forward portion 111 and a rearward portion 112 having an approximately circular shape (circular front to rear), and may have other shapes including, but not limited to, an approximately D-shape with a front to rear circle, and a rectangular or square shape with a front to rear.
As shown in fig. 3, the sensing system 120 includes a position determining device 121 on the machine body 110, a collision sensor and a proximity sensor on the bumper 122 of the forward portion 111, a cliff sensor on the lower part of the body, and sensing devices such as a magnetometer, an accelerometer, a gyroscope and an odometer inside the body, all of which provide position information and motion state information of the machine to the control system 130. The position determining device 121 includes, but is not limited to, a camera and a laser distance sensor (LDS).
As shown in fig. 3, the forward portion 111 of the machine body 110 may carry a bumper 122, and the bumper 122 may detect one or more events in the travel path of the sweeping robot 100 via a sensor system disposed thereon, such as an infrared sensor, when the driving wheel module 141 propels the robot to walk on the floor during cleaning, and the sweeping robot 100 may control the driving wheel module 141 to cause the sweeping robot 100 to respond to the events, such as moving away from an obstacle, by detecting the events, such as an obstacle and a wall, by the bumper 122.
The control system 130 is arranged on a circuit board in the machine body 110 and includes non-transitory memory, such as a hard disk, flash memory and random access memory, and a communication and computing processor, such as a central processing unit or an application processor. Using a positioning algorithm such as Simultaneous Localization And Mapping (SLAM), the application processor draws a real-time map of the robot's environment from the obstacle information fed back by the laser distance measuring device. Combining the distance and speed information fed back by the sensors on the bumper 122, the cliff sensor, the magnetometer, the accelerometer, the gyroscope, the odometer and so on, it comprehensively judges the current working state and pose of the sweeper, such as crossing a threshold, getting onto a carpet, standing at a cliff, being stuck above or below, having a full dust box, or being picked up, and issues a specific next-step action strategy for each situation, so that the robot's work better meets its owner's requirements and delivers a better user experience.
As shown in fig. 4, the drive system 140 can steer the robot 100 across the ground based on drive commands carrying distance and angle information (e.g., x, y and theta components). The drive system 140 includes a drive wheel module 141 which controls the left and right wheels; to control the machine's motion more precisely, the drive wheel module 141 preferably comprises a left drive wheel module and a right drive wheel module, opposed along a transverse axis defined by the body 110. To move more stably or with greater mobility over the ground, the robot may include one or more driven wheels 142, including but not limited to universal wheels. Each drive wheel module comprises a travel wheel, a drive motor and a control circuit for the motor, and can also be connected to a circuit for measuring drive current and to an odometer. The drive wheel module 141 can be detachably coupled to the main body 110 for easy disassembly and maintenance. Each drive wheel may have a biased drop-type suspension system, movably fixed (e.g., rotatably attached) to the robot body 110 and spring-biased downward and away from it. The spring bias lets the drive wheels maintain contact and traction with the floor with a certain landing force, while the cleaning elements of the sweeping robot 100 also contact the floor 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. As a dry cleaning system, the main cleaning function is derived from the sweeping system 151 constituted by the roll brush, the dust box, the blower, the air outlet, and the connecting members therebetween. The rolling brush with certain interference with the ground sweeps the garbage on the ground and winds the garbage to the front of a dust suction opening between the rolling brush and the dust box, and then the garbage is sucked into the dust box by air which is generated by the fan and passes through the dust box and has suction force. The dry cleaning system may also include an edge brush 152 having an axis of rotation that is angled relative to the floor for moving debris into the roller brush area of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal hydride or lithium battery. The battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit and a battery under-voltage monitoring circuit, which in turn connect to the single-chip microcomputer control circuit. The host charges by connecting, through charging electrodes on the side or underside of the body, to the charging pile. If dust adheres to an exposed charging electrode, charge can accumulate during charging and melt or deform the plastic around the electrode, or even the electrode itself, so that normal charging can no longer continue.
The human-computer interaction system 180 comprises keys on a host panel, and the keys are used for a user to select functions; the machine control system can further comprise a display screen and/or an indicator light and/or a loudspeaker, wherein the display screen, the indicator light and the loudspeaker show the current state or function selection item of the machine to a user; and a mobile phone client program can be further included. For the path navigation type automatic cleaning equipment, a map of the environment where the equipment is located and the position of a machine can be displayed to a user at a mobile phone client, and richer and more humanized function items can be provided for the user.
Fig. 5 is a schematic illustration of an embodiment of a method according to the invention. Referring to fig. 5, the self-walking robot control method according to this embodiment includes the steps of:
s502, selecting a position as a central point of an area to be explored; wherein at least part of the area to be explored is an area which is not explored yet by the self-walking robot when the task is executed;
s504, moving to the central point to search the obstacles around, recording the positions of the detected obstacles, and completing the search of the current area to be searched;
and S506, repeatedly executing all the steps until all the areas related to the task are searched.
In one or more specific embodiments, according to a preset area size, a preset region whose geometric center lies on the boundary of the explored area and which overlaps the explored area with the smallest overlapping area is selected as the area to be explored, and its geometric center is taken as the central point. The preset area size may be specified by a predetermined diameter or radius, chosen so as not to exceed the detection range of the self-walking robot's built-in image acquisition device. Because the central point lies on the boundary of the explored area, the area to be explored inevitably overlaps the explored area; and since the explored area is known to be free of unrecognized obstacles around that boundary point, the robot will not hit an obstacle while moving to the central point. Moreover, no unswept gaps are left between areas during cleaning, so missed coverage is avoided.
In one or more other specific embodiments, when a task is started to be executed, a point is arbitrarily selected as a central point of a region to be explored, and a preset region is constructed by taking a preset area as a region area and is used as the region to be explored.
In one or more specific embodiments, the preset region may have any one of the following shapes: circular, rectangular or polygonal, preferably circular. The central point of the region is its geometric center; for example, the center of a circular region is its geometric center, and the intersection of the diagonals of a rectangle or polygon is its geometric center.
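For the polygonal case, the geometric center can be computed directly. A minimal sketch (hypothetical helper, using the vertex centroid, which coincides with the diagonal intersection for a rectangle):

```python
def geometric_center(vertices):
    """Vertex centroid of a polygonal preset region given as (x, y)
    tuples. For a rectangle this equals the diagonal intersection; for
    a circular region the center is given directly, so no computation
    is needed."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```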
In one or more embodiments, after moving to the central point and exploring the surroundings for obstacles, the robot may start to walk within the explored area and traverse it; or, after obstacle exploration has been completed for all areas involved in the task, the robot may start to walk and traverse all those areas; or, if the area involved in the task comprises two or more partitions, the robot may start to walk within each partition as soon as its exploration is complete, traversing the current partition, and then explore and walk the next partition after the current one is finished, until all areas involved in the task have been traversed; and an obstacle avoidance operation is performed whenever an obstacle is detected again during the walking traversal.
Fig. 1 is a schematic diagram illustrating, with a circle as an example, the step of determining the area to be explored in the method of the present invention. As shown in fig. 1, 200 indicates the explored area (dark gray portion) and 201 indicates the unexplored area (blank portion). According to the present embodiment, the explored area 200 is first recorded on a map; a position on the boundary between the explored area 200 and the unexplored area 201 is then taken as the central point, and a preset radius is applied to determine the circular area to be explored 202. The area to be explored 202 overlaps the explored area 200.
In one or more specific embodiments, the area to be explored may be an area that overlaps the explored area with the smallest overlapping area. In this way, the area of the area where the area to be searched and the searched area overlap can be reduced as much as possible, thereby reducing the time for repeating cleaning and improving the cleaning efficiency.
The method by which the cleaning robot determines the central point is shown in fig. 6. Comparing fig. 6a and fig. 6b, the central points A and B both lie on the boundary between the explored area and the unexplored area, but for a circular preset region the area to be explored in fig. 6b clearly overlaps the explored area with a smaller overlapping area. By moving the candidate central point along the boundary between the explored and unexplored areas in this manner, the point at which the overlap with the explored area is minimal is found and taken as the central point.
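On an occupancy-grid map, the minimum-overlap selection described above can be sketched as a search over candidate boundary cells. This is an illustrative sketch under the assumption of a unit grid; the function name `min_overlap_center` and the set-based map representation are not from the patent.

```python
import math

def min_overlap_center(explored, frontier, radius):
    """Pick the frontier cell whose circular preset region overlaps the
    explored cells the least (sketch on a unit grid).

    explored: set of (x, y) cells already explored
    frontier: iterable of candidate cells on the explored/unexplored boundary
    radius:   radius of the circular preset region, in cell units
    """
    def overlap(center):
        cx, cy = center
        # Count explored cells falling inside the circle around this candidate.
        return sum(1 for (x, y) in explored
                   if math.hypot(x - cx, y - cy) <= radius)

    return min(frontier, key=overlap)
```

For a square explored block, a corner of the boundary overlaps the explored area less than the middle of an edge, so the corner is preferred, consistent with the comparison of points A and B in figs. 6a and 6b.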
As will be appreciated in conjunction with figs. 6a and 6b, in one or more embodiments, moving to the central point to perform obstacle detection of the surroundings may include: moving to the central point and rotating in place to obtain image information within the area to be explored, and judging from the image information whether an obstacle exists in the area to be explored. The image information is acquired by an image acquisition device, which may be one or more optical, electrical or acoustic devices; examples include optical cameras and laser, infrared or ultrasonic detectors or sensors. In a particular embodiment, an optical camera is used. Where several such devices are provided, they may be disposed uniformly or symmetrically around the center of the self-walking robot, and one or more of them may be built into the robot.
According to the invention, the rotation may be continuous or may pause after each predetermined angle of rotation. In a more specific embodiment, the rotation may be at least one full revolution, for example 360 degrees; in another more specific embodiment, it may be 720 degrees. In one or more other embodiments, the self-walking robot may rotate at the central point by less than one revolution, as long as the detection angles of its one or more optical, electrical or acoustic image acquisition devices together cover at least 360 degrees. For example, when the self-walking robot incorporates one optical camera with a field angle of 120 degrees, rotating the camera by 240 degrees achieves a 360-degree detection angle.
In another embodiment, the self-walking robot need not rotate at the central point at all, as long as the detection angles of the one or more optical, electrical or acoustic image acquisition devices together cover at least 360 degrees. For example, where the self-walking robot incorporates three optical cameras with 120-degree field angles arranged symmetrically around its center (the three field angles together covering exactly 360 degrees), no rotation is needed.
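The relationship between the cameras' combined field angles and the remaining rotation needed for full 360-degree coverage can be expressed in a few lines. The function name `required_rotation` and the simplifying assumption that the cameras' coverage does not overlap at rest are illustrative, not from the patent.

```python
def required_rotation(fovs_deg):
    """Rotation (degrees) needed so the combined detected angle reaches 360.

    fovs_deg: field angles of the robot's cameras, assumed arranged so
    that their coverage does not overlap before rotating.
    """
    covered = sum(fovs_deg)
    return max(0.0, 360.0 - covered)
```

This reproduces the two examples in the text: a single 120-degree camera must rotate 240 degrees, while three symmetric 120-degree cameras need no rotation at all.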
In one or more embodiments, the self-walking robot pauses for a predetermined time after each rotation through a predetermined angle, to allow detection by the one or more optical, electrical or acoustic image acquisition devices. The predetermined angle may be, for example, 15, 30, 45, 90 or 120 degrees; the predetermined time may be, for example, several seconds, several tens of seconds or several minutes. Detection at each stop position, for example photographing, may be performed once or several times. In this way it can be further ensured that obstacles around the self-walking robot are detected.
In one or more specific embodiments, the one or more optical, electrical or acoustic image acquisition devices may be one or more optical cameras, and the predetermined angle is less than or equal to the field angle of the one or more optical cameras. Specifically, the predetermined angle may be, for example, 90 degrees, and the field angle may be, for example, 120 degrees.
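The constraint that the step angle must not exceed the field angle guarantees that consecutive stops leave no angular gap. A minimal sketch of the rotate-pause schedule follows; the function name `rotation_stops` is hypothetical.

```python
import math

def rotation_stops(step_deg, fov_deg, total_deg=360):
    """Return the stop headings for a rotate-and-pause scan.

    Full coverage requires step_deg <= fov_deg: each new stop's field
    of view then reaches back to the previous stop's coverage.
    """
    if step_deg > fov_deg:
        raise ValueError("step exceeds field angle: gaps would remain between stops")
    n = math.ceil(total_deg / step_deg)  # number of stops to cover the circle
    return [i * step_deg % 360 for i in range(n)]
```

With the example values from the text (90-degree steps, 120-degree field angle), the robot stops at four headings; a step larger than the field angle is rejected because some directions would never be imaged.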
In one or more specific embodiments, the walking action at least traverses the areas within the area to be explored other than the already-walked area. In this way, cleaning time can be further reduced and cleaning efficiency improved.
In one or more embodiments, the steps of selecting a position as the central point of an area to be explored, and of moving to that central point to explore the surrounding obstacles (thereby recording the positions of detected obstacles and marking the area as explored), are performed repeatedly until obstacle exploration is completed for all areas involved in the task.
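The repeated select-and-explore loop can be sketched at the top level as follows. The callback names (`select_center`, `explore_around`, `all_explored`) are hypothetical placeholders for the operations described in the text.

```python
def explore_task(select_center, explore_around, all_explored):
    """Top-level loop of the control method (illustrative sketch).

    select_center():   pick a position as the next area's central point
    explore_around(c): move to c, detect and record obstacles, and mark
                       the area around c as explored
    all_explored():    True once every area involved in the task is explored
    """
    while not all_explored():
        center = select_center()
        explore_around(center)
```

Each iteration extends the explored area, so the frontier used by `select_center` shrinks until the termination condition holds.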
According to an embodiment of the present invention, if the self-walking robot is performing its first cleaning, no map is stored in advance; if it is performing a subsequent cleaning, an old map from a previous cleaning is stored in advance. When the self-walking robot performs a subsequent cleaning, the old map is therefore updated according to the newly formed map.
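One simple way to realize this update, sketched here with a hypothetical dictionary-based grid map (cell coordinates mapped to cell states), is to let freshly observed cells overwrite their stored values while unvisited cells keep their previous state:

```python
def update_map(old_map, new_map):
    """Merge a freshly built map into the stored one (illustrative sketch).

    Cells observed during the current cleaning overwrite the old values;
    cells the robot has not revisited keep their previous state.
    """
    merged = dict(old_map)
    merged.update(new_map)
    return merged
```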
In one or more specific embodiments, the method further comprises: and after judging that the obstacle exists in the area to be searched through the image information, identifying the type of the obstacle. And when walking in the searched area, identifying and confirming the type of the obstacle again within a certain distance range from the detected obstacle.
As an example, after it is determined from the image information that an obstacle exists in the area to be explored, a primary type identification of the obstacle is performed as follows. Obstacle information is obtained through the image acquisition device: image information is acquired in real time while the robot travels, and when the image information satisfies a preset model condition, it is determined that an obstacle exists at the current position and a category label is assigned according to the preset model. Specifically, when an obstacle image is present, it is compared with the image models on which the robot has been trained, and the recognized obstacle is classified according to the comparison result. For example, when an image of shoes is captured, it is matched against the several model types stored by the robot, and if the matching proportion for "shoes" is the highest, it is classified as shoes. If the obstacle image cannot be recognized clearly, the obstacle may be imaged from several angles. For example, when the probabilities of recognizing the object ahead as a wire bundle, as excrement, and as a ball are 80%, 70%, and 75% respectively, the three probabilities are too close to allow classification; the robot may then acquire image information again from another angle and perform a secondary recognition, repeating until the object can be recognized as a certain class with a large probability margin, for example a probability of 80% for a wire bundle while the probabilities for all other obstacle classes are 50% or lower, in which case the obstacle is classified as a wire bundle.
During the cleaning process, the obstacle category is confirmed a second time. Specifically, when the robot walks in the explored area, the type of the obstacle is identified again within a certain distance of the detected obstacle (at this point the range is already relatively close, for example 20 cm). Image information of the obstacle is acquired at this close position while traveling, the obstacle image is compared with the image models on which the robot has been trained, and the recognized obstacle is classified according to the comparison result; for example, when an image of shoes is captured, it is matched against the several model types stored by the robot, and if the matching proportion for "shoes" is the highest, it is classified as shoes. After this secondary confirmation, the obstacle is identified as belonging to a specific type.
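The accept-or-reimage decision in both recognition stages can be sketched as a probability-margin test. The function name `classify` and the 0.25 margin threshold are illustrative assumptions chosen to match the worked example (an 80% match accepted only when all other classes score 50% or lower); the patent does not specify a threshold.

```python
def classify(scores, margin=0.25):
    """Classify an obstacle only when the best match wins by a clear margin.

    scores: dict mapping category -> match probability from the image models
    Returns the winning category, or None when the probabilities are too
    close and another view from a different angle is needed.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, second = ranked[0], ranked[1]
    if best[1] - second[1] >= margin:
        return best[0]
    return None  # too close to call: acquire image information again
```

With the probabilities from the text, {wire: 0.80, excrement: 0.70, ball: 0.75} yields no decision, while {wire: 0.80, excrement: 0.50, ball: 0.45} is classified as a wire bundle.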
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance and the position of any existing obstacle is recorded, so that the self-walking robot can execute an obstacle avoidance operation when it encounters the obstacle during cleaning, thereby avoiding collisions and improving the user experience. In addition, according to the present invention, obstacles outside the current area to be explored can also be recognized; although such an obstacle is farther from the self-walking robot and its recognition rate is accordingly lower, it can be confirmed several more times from the central point of the next area to be explored once that area has been determined, thereby improving the obstacle recognition rate.
As shown in fig. 7, according to a second aspect of the present invention, there is provided a self-walking robot control device including:
a central point selecting unit 702, configured to select a position as a central point of an area to be explored; wherein at least part of the area to be explored is an area which is not explored yet by the self-walking robot when the task is executed;
an obstacle search recording unit 704 configured to move to the central point to search for obstacles around, thereby recording positions of detected obstacles and marking the area to be searched as a searched area;
a repeated execution unit 706, configured to repeatedly execute all the above steps until the obstacle search is completed for all the areas involved in the task.
In one or more specific embodiments, the apparatus may further include: and the obstacle avoidance operation execution unit is used for executing obstacle avoidance operation when the obstacle is detected again in the walking traversal process.
In one or more specific embodiments, the apparatus may further include: and the walking area marking unit is used for marking the area which is walked by the self-walking robot as a walking area in the process of executing the task.
In one or more specific embodiments, the apparatus may further include: a type recognition unit, configured to recognize the type of an obstacle after it is judged from the image information that the obstacle exists in the area to be explored, and to recognize and confirm the type again within a certain distance of the detected obstacle when the robot walks in the explored area.
The self-walking robot control device according to the present invention is used for implementing the self-walking robot control method according to the above embodiments, and the same technical features have the same technical effects, and will not be described herein again.
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance and the position of any existing obstacle is recorded, so that the self-walking robot can execute an obstacle avoidance operation when it encounters the obstacle during cleaning, thereby avoiding collisions and improving the user experience. In addition, according to the present invention, obstacles outside the current area to be explored can also be recognized; although such an obstacle is farther from the self-walking robot and its recognition rate is accordingly lower, it can be confirmed several more times from the central point of the next area to be explored once that area has been determined, thereby improving the obstacle recognition rate.
The disclosed embodiments provide a non-transitory computer readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps as recited in any of the above.
The disclosed embodiment provides a robot, comprising a processor and a memory, wherein the memory stores computer program instructions capable of being executed by the processor, and the processor executes the computer program instructions to realize the method steps of any one of the foregoing embodiments.
As shown in fig. 8, the robot may include a processing device (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage device 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic robot 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, a hard disk; and a communication device 809. The communication means 809 may allow the electronic robot to perform wireless or wired communication with other robots to exchange data. While fig. 8 illustrates an electronic robot having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flow diagram may be implemented as a robot software program. For example, embodiments of the present disclosure include a robot software program product comprising a computer program embodied on a readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the robot; or may be separate and not assembled into the robot.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A self-walking robot control method, characterized by comprising:
selecting a position as a central point of an area to be explored; wherein at least part of the area to be explored is an area which is not explored yet by the self-walking robot when the task is executed;
moving to the central point to search for obstacles around, thereby recording the positions of the detected obstacles and marking the area to be searched as a searched area;
and repeatedly executing all the steps until the obstacle search is completed for all the areas related to the task.
2. The method of claim 1, wherein said moving to said center point for obstacle exploration around comprises:
and moving to the central point and performing self-rotation to obtain image information in the area to be searched, and judging whether an obstacle exists in the area to be searched or not according to the image information.
3. The method of claim 2, wherein the moving to the center point and spinning to obtain image information within the area to be explored comprises:
moving to the central point and continuously rotating for at least one revolution to acquire image information in the area to be explored; or,
moving to the central point and stopping at the central point every time of rotating a preset angle to acquire image information in the area to be explored.
4. The method according to claim 3, wherein the predetermined angle is equal to or smaller than a field angle of an apparatus for acquiring the image information.
5. The method of claim 1, further comprising:
after the obstacle is moved to the central point to explore the obstacles around, walking is started in the explored area, the explored area is traversed, and obstacle avoidance operation is executed when the obstacle is detected again in the walking traversal process; or
After the obstacle exploration is finished for all the areas related to the task, the user starts to walk, traverses all the areas related to the task, and executes obstacle avoidance operation when the obstacles are detected again in the walking traversal process; or
If the area related to the task comprises more than 2 subareas, walking is started after the exploration of each subarea is finished, and the current subarea is traversed; and after the exploration and walking of the current subarea are finished, the next subarea is explored and walked until all areas related to the task are traversed, and the obstacle avoidance operation is executed when the obstacle is detected again in the walking and traversing process.
6. The method of claim 5, further comprising:
in the process of executing the task, marking the area which is already walked by the self-walking robot as a walked area;
the walking in the explored area specifically comprises: and the walking action at least traverses the areas except the walking area in the area to be explored.
7. The method of claim 1,
the selecting a position as a central point of an area to be explored specifically includes:
according to a preset area, selecting a preset area with a geometric center on the boundary of the searched area, overlapping the searched area and having the smallest overlapping area as a to-be-searched area on the boundary, and taking the geometric center of the to-be-searched area as the central point.
8. The method according to claim 1 or 7, wherein the selecting a location as a center point of an area to be explored specifically comprises:
when a task is started to be executed, a point is selected as a central point of a region to be explored at will, and a preset region is constructed by taking a preset area as a region area and is used as the region to be explored.
9. The method of claim 1, wherein the area to be explored comprises any one of the following shapes: circular, rectangular or polygonal.
10. The method of claim 5 or 6, further comprising:
after judging that the obstacle exists in the area to be explored through the image information, carrying out type recognition on the obstacle;
and when walking in the searched area, identifying and confirming the type of the obstacle again within a certain distance range from the detected obstacle.
CN202010382048.9A 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium Active CN113625700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010382048.9A CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010382048.9A CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Publications (2)

Publication Number Publication Date
CN113625700A true CN113625700A (en) 2021-11-09
CN113625700B CN113625700B (en) 2024-07-02

Family

ID=78377163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010382048.9A Active CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Country Status (1)

Country Link
CN (1) CN113625700B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355934A (en) * 2021-12-31 2022-04-15 南京苏美达智能技术有限公司 Obstacle avoidance method and automatic walking equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104865965A (en) * 2015-05-20 2015-08-26 深圳市锐曼智能装备有限公司 Barrier-avoidance control method combining depth camera and supersonic wave for robot and system thereof
CN105467992A (en) * 2015-11-20 2016-04-06 纳恩博(北京)科技有限公司 Method and apparatus for determining path of mobile electronic equipment
CN106855411A (en) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A kind of robot and its method that map is built with depth camera and obstacle avoidance system
WO2018214825A1 (en) * 2017-05-26 2018-11-29 杭州海康机器人技术有限公司 Method and device for assessing probability of presence of obstacle in unknown position
CN110488809A (en) * 2019-07-19 2019-11-22 上海景吾智能科技有限公司 A kind of indoor mobile robot independently builds drawing method and device


Also Published As

Publication number Publication date
CN113625700B (en) 2024-07-02

Similar Documents

Publication Publication Date Title
US20230225576A1 (en) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
CN111990929B (en) Obstacle detection method and device, self-walking robot and storage medium
CN109947109B (en) Robot working area map construction method and device, robot and medium
CN106200645B (en) Autonomous robot, control device, and control method
EP4137905A1 (en) Robot obstacle avoidance method, device, and storage medium
CN110623606B (en) Cleaning robot and control method thereof
CN108852174B (en) Autonomous mobile robot and pile searching method, control device and intelligent cleaning system thereof
CN108873879B (en) Autonomous mobile robot and pile searching method, control device and intelligent cleaning system thereof
CN111990930B (en) Distance measuring method, distance measuring device, robot and storage medium
CN110136704B (en) Robot voice control method and device, robot and medium
CN114601399B (en) Control method and device of cleaning equipment, cleaning equipment and storage medium
CN114595354A (en) Robot mapping method and device, robot and storage medium
CN113625700B (en) Self-walking robot control method, device, self-walking robot and storage medium
CN112022026A (en) Self-propelled robot and obstacle detection method
EP4332501A1 (en) Distance measurement method and apparatus, and robot and storage medium
CN114879691A (en) Control method for self-propelled robot, storage medium, and self-propelled robot
CN217982190U (en) Self-walking equipment
CN114601373B (en) Control method and device of cleaning robot, cleaning robot and storage medium
CN213216762U (en) Self-walking robot
WO2024140195A1 (en) Self-propelled device obstacle avoidance method and apparatus based on line laser, and device and medium
CN114610013A (en) Obstacle-encountering processing method and device for self-walking robot, robot and storage medium
CN116149307A (en) Self-walking equipment and obstacle avoidance method thereof
CN116392043A (en) Self-moving cleaning device, control method and device thereof and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220418

Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing

Applicant after: Beijing Stone Innovation Technology Co.,Ltd.

Address before: No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085

Applicant before: Beijing Roborock Technology Co.,Ltd.

GR01 Patent grant