CN113524265B - Robot anti-falling method, robot and readable storage medium - Google Patents

Robot anti-falling method, robot and readable storage medium

Info

Publication number
CN113524265B
CN113524265B (application CN202110886723.6A)
Authority
CN
China
Prior art keywords
robot
falling
safety
scene
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110886723.6A
Other languages
Chinese (zh)
Other versions
CN113524265A (en)
Inventor
Request not to publish the inventor's name
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tangen Intelligent Technology Changshu Co ltd
Original Assignee
Tangen Intelligent Technology Changshu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tangen Intelligent Technology Changshu Co ltd
Priority to CN202110886723.6A
Priority to CN202310454027.7A
Publication of CN113524265A
Application granted
Publication of CN113524265B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/06 Safety devices
    • B25J 13/00 Controls for manipulators
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to the field of robots and discloses a robot anti-falling method, a robot and a readable storage medium. In the robot anti-falling method, the distance between the robot and the supporting surface detected by an anti-falling sensor is acquired, and it is judged whether the distance meets a preset anti-falling condition. When it does, whether the robot is in a safe area where falling cannot occur is determined according to the environment, and a stop-motion instruction is sent to the robot if the robot is not in the safe area. In this way, when the distance data provided by the anti-falling sensor indicates a possible falling risk, the falling risk is confirmed a second time, misjudgment of the falling risk in specific scenes is avoided, and the running efficiency of the robot is improved.

Description

Robot anti-falling method, robot and readable storage medium
Technical Field
The present disclosure relates to the field of robots, and in particular, to a robot anti-falling method, a robot, and a readable storage medium.
Background
With the development of robot technology, various robot products such as cleaning robots, security robots, inspection robots and distribution robots have emerged. To ensure that these robots work safely and reliably in their application scenes, they need the ability to sense the surrounding environment and avoid places, such as stairs and raised platforms, where they may fall.
Currently, robots generally sense the surrounding environment with different types of sensors, such as infrared sensors, ultrasonic sensors and laser sensors, which work by emitting energy such as infrared rays, ultrasonic waves or laser light toward the external environment and receiving the reflected energy. These sensors can have perception problems in certain situations, such as those shown in figs. 1-4: figs. 1-2 show drain gratings, fig. 3 shows a black light-absorbing carpet, and fig. 4 shows a transparent glass floor. In such cases the sensors are likely to be unable to perceive accurately, resulting in a false positive of the robot's falling risk. For example, when a laser sensor scans a drain grating, the laser may enter a gap of the grating so that no reflected laser is received; the sensor then cannot accurately judge the distance between the robot and the grating and may misjudge the grating as an area into which the robot would fall.
Disclosure of Invention
The embodiment of the application provides a robot anti-falling method, a robot and a readable storage medium, to address the problem that the anti-falling judgment of a robot is prone to misjudgment in specific scenes.
In a first aspect, an embodiment of the present application provides a robot anti-falling method, including:
acquiring the distance between the robot detected by the anti-falling sensor and the supporting surface;
judging whether the distance meets a preset anti-falling condition, wherein the anti-falling condition is used for describing the range of distances within which the robot is likely to fall;
when the distance meets the preset anti-falling condition, determining whether the robot is in a safe area where falling cannot occur according to the environment;
in the case where the robot is not in the safe area, a stop motion instruction is sent to the robot.
In one possible implementation of the first aspect, determining whether the robot is in a safe area where no drop occurs according to the environment includes:
determining whether the robot is in a safe area where falling cannot occur by acquiring an environment image; or
determining whether the robot is in a safe area where falling cannot occur through the positioning of the robot and a safety area calibrated in a scene map.
In one possible implementation of the first aspect, determining whether the robot is in a safe area where no drop occurs by acquiring an environmental image includes:
Acquiring an image shot by a camera associated with the movement direction of the robot;
identifying an image, and judging whether a result of image identification meets a preset safety scene condition, wherein the safety scene condition is used for describing a scene that the robot cannot fall down;
and when the result of the image recognition does not meet the preset safety scene condition, determining that the robot is not in the safety area.
In a possible implementation of the first aspect, identifying the image includes:
identifying the image through a preset neural network identification model, and acquiring an image identification result, wherein the image identification result comprises an object or an area.
In a possible implementation of the first aspect, the safety scene condition includes at least one of: a drainage grating, a black light-absorbing area, and a transparent floor area.
In one possible implementation of the first aspect, determining whether the robot is in a safe area where no drop occurs through the positioning of the robot and a safe area calibrated in a scene map includes:
acquiring the position of the robot in the current scene;
judging whether the position is in a safety area marked in the scene map;
when the position is not in the safety area calibrated in the scene map, the robot is determined to be not in the safety area.
In a possible implementation of the first aspect, before determining whether the robot is in a safe area where falling cannot occur according to the environment, the method further includes:
checking whether a valid security signal is present;
when a valid safety signal is present, it is determined that the robot is in a safe area.
In a possible implementation of the first aspect, the method further includes:
and under the condition that the robot is in a safe area, continuing to execute the motion instruction and setting a safety signal.
In a second aspect, the present application provides a robot, including:
a memory for storing instructions for execution by the first processor or the second processor, and
The first processor is one of a plurality of processors of the robot and is used for acquiring the distance between the robot detected by the anti-falling sensor and the supporting surface and judging whether the distance meets the preset anti-falling condition, wherein the anti-falling condition is used for describing the range of the distance of the robot which is likely to fall;
the second processor is one of a plurality of processors of the robot and is used for determining whether the robot is in a safe area where falling cannot occur or not according to the environment when the distance meets the preset falling prevention condition and sending a motion stopping instruction to the robot when the robot is not in the safe area;
At least one anti-falling sensor for detecting the distance between the robot and the supporting surface;
the motion component is used for receiving the motion stopping instruction sent by the second processor and stopping the motion of the robot according to the motion stopping instruction; and receiving a motion instruction sent by the second processor and continuing the motion of the robot according to the motion instruction.
In a third aspect, embodiments of the present application provide a readable storage medium having stored thereon instructions that, when executed on a robot, cause the robot to perform the robot anti-drop method of the first aspect and any of the various possible implementations of the first aspect.
According to the robot anti-falling method of the application, when a possible falling risk is determined from the distance data provided by the anti-falling sensor, an environment image in the movement direction of the robot is captured by a camera and the object or area in the image is identified; alternatively, whether the robot is in a safety area calibrated in the scene map is judged from the robot's positioning. Either way, a second confirmation of the falling risk is obtained, misjudgment of the falling risk in specific scenes is avoided, the running efficiency of the robot is improved, and the working range of the robot is enlarged.
Drawings
Fig. 1 shows a schematic view of a drain grating.
Fig. 2 shows a schematic view of another drain grating.
Fig. 3 shows a schematic view of a black light-absorbing carpet.
Fig. 4 shows a schematic view of a transparent glass floor.
Fig. 5 illustrates a schematic view of a scene of a robot as it moves, according to some embodiments of the present application.
Fig. 6 illustrates a hardware block diagram of a robot, according to some embodiments of the present application.
Fig. 7 illustrates a flow chart of a method of robotic fall protection according to some embodiments of the present application.
Fig. 8 illustrates a flow chart of another robotic fall arrest method, according to some embodiments of the present application.
Detailed Description
Illustrative embodiments of the present application include, but are not limited to, robotic fall arrest methods, robots, and readable storage media.
It is to be appreciated that as used herein, the term module may refer to or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality.
It is to be appreciated that in various embodiments of the present application, the processor may be a microprocessor, a digital signal processor, a microcontroller, or the like, and/or any combination thereof. According to another aspect, the processor may be a single core processor, a multi-core processor, or the like, and/or any combination thereof.
It can be appreciated that the robot anti-falling method of the present application is applicable to a variety of robots that need to operate in different work scenarios.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 5 shows a scenario in which a drop-proof determination is made as the robot 100 moves toward the target area 200, according to some embodiments of the present application. As shown in fig. 5, the robot 100 moves toward the target area 200, and the anti-drop sensor of the robot 100 continuously detects the distance between the robot 100 and a supporting surface, i.e. a flat or inclined surface, such as a floor or stairs, that can support the robot 100. In the target area, the value detected by the anti-drop sensor of the robot 100 is large, which suggests that the robot may be a large distance from the bottom of the area. When this value satisfies a preset anti-falling condition, the robot 100 determines that there is a risk of falling and may stop advancing to prevent a fall.
The anti-drop sensor of the present application may be a sensor dedicated to detecting the distance between the robot 100 and the support surface, or it may be the distance-detecting functional part of a combined sensor that also detects obstacles ahead.
Here, the fall prevention sensor of the robot 100 may be, for example, an infrared sensor, an ultrasonic sensor, a laser sensor, or the like.
The target area 200 may be a drain grating, a transparent glass floor, a black light absorbing carpet, or the like.
The present application will be described in more detail below with respect to an example in which the fall protection sensor of the robot 100 is an infrared sensor and the target area 200 is a black light absorbing carpet.
While in motion, the infrared sensor of the robot 100 continuously emits infrared rays toward the ground ahead; the ground reflects the infrared rays, and the reflected rays are received by the infrared sensor and used to calculate the distance between the sensor and the ground ahead. When the infrared rays emitted by the sensor strike the black light-absorbing carpet 200, the carpet absorbs most of them and the sensor receives essentially no reflection. The sensor therefore reports the distance to the carpet as a very large value, the robot 100 misjudges the black light-absorbing carpet 200 as an area into which it would fall, and the robot 100 stops moving in the direction of the carpet.
In fact, the black light-absorbing carpet 200 is not an area where the robot 100 would fall; the anti-drop sensor has misjudged. To avoid unnecessarily limiting its working range on the basis of the sensor's distance data alone, the robot 100 further determines whether it is currently in a safe area according to its location or environment. If the robot 100 is judged to be in a safe area, it continues to advance into the target area 200; if not, it stops advancing to prevent a fall.
To check whether the current environment actually presents a falling risk, the robot 100 captures an image of the black light-absorbing carpet 200 with its camera and performs object recognition. Once a black light-absorbing carpet is recognized in the image, the robot 100 determines that the current scene is a safe scene and may continue to advance toward the carpet 200.
If the falling risk were judged only from the detection distance of the anti-falling sensor, the sensor's inability to judge the distance accurately over areas such as the black light-absorbing carpet would mislead the robot into a wrong decision: the working range of the robot would shrink, extra travel would be spent detouring around the misjudged area, and the working efficiency of the robot would drop. The same applies to the drain grating and transparent glass floor scenes, which are not described again here.
With the method provided by this technical solution, after an anti-falling judgment is made from the distance data between the anti-falling sensor and the target area, it is further confirmed whether the current scene is a safe scene; if it is, the robot may continue to move toward the target area. This reduces the influence of the anti-falling sensor's misjudgment on the robot's motion, enlarges the working range of the robot, and raises its working efficiency.
The determination of the fall sensor and the determination of the safety area may be performed simultaneously, or the determination of the fall sensor may be performed first and then the determination of the safety area may be performed, which is not intended to be limited in particular.
Fig. 6 illustrates a schematic structural diagram of a robot 100, according to some embodiments of the present application. Specifically, as shown in fig. 6, the robot 100 includes a processor 110, an infrared sensor 120, an ultrasonic sensor 130, a laser sensor 140, a camera 150, a memory 160, a moving part 170, and the like.
The processor 110 may be used to read and execute computer readable instructions. In particular implementations, the processor 110 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and for issuing the control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for fixed-point or floating-point arithmetic operations, shift operations, logic operations, and the like, and may also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate results, and the like during instruction execution. In particular implementations, the hardware architecture of the processor 110 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
For example, the NPU is a neural-network (NN) computing processor. By borrowing from the structure of biological neural networks, for example the way signals are transmitted between human brain neurons, it processes input information rapidly and can also learn continuously. The NPU may be used for intelligent perception by the robot 100, for example image recognition and object recognition.
In some embodiments of the present application, the NPU may identify an object or an area in the movement direction of the robot 100 by performing image recognition or object recognition on the scene image acquired by the camera 150, so that the robot 100 can further determine whether a fall may occur.
In some embodiments of the present application, the processor 110 may be configured to perform real-time processing, e.g., receive real-time signals, issue real-time instructions, etc. Specifically, the processor 110 may be a single chip microcomputer running a real-time operating system, or may be a special CPU in a chipset dedicated to processing real-time signals, etc.
The infrared sensor 120 is a sensor that performs distance measurement using physical properties of infrared rays, and the infrared sensor 120 emits infrared rays to the outside and calculates a distance to an external object from the reflected infrared rays. The infrared sensor is not in direct contact with the measured object during measurement, so that friction does not exist, and the infrared sensor has the advantages of high sensitivity, quick response and the like. The infrared sensor 120 has a disadvantage in that it cannot be used to detect a black or transparent object.
The ultrasonic sensor 130 is a sensor that measures a distance using ultrasonic waves, and emits ultrasonic waves having a vibration frequency higher than 20kHz, which have characteristics of high frequency, short wavelength, small diffraction phenomenon, and particularly good directivity, and can be directed to propagate as rays. The ultrasonic sensor 130 can measure transparent obstacles such as glass and water surface, but is susceptible to false alarm due to echo interference.
The laser sensor 140 is a sensor that measures distance using laser technology; it provides contactless ranging with high speed, high accuracy, a large measuring range, and strong resistance to ambient light and electrical interference. When the laser sensor 140 works, its laser-emitting diode sends laser pulses toward the target; the target reflects and scatters the laser in all directions, part of the scattered light returns to the receiver, where the optical system images it onto an avalanche photodiode, and the photodiode converts the detected optical signal into a corresponding electrical signal. The laser sensor 140 determines the target distance by recording and processing the time elapsed from emission of the light pulse to reception of the return. The disadvantage of the laser sensor 140 is that it is strongly affected by the atmosphere and can only measure within a limited range.
In some embodiments of the present application, the robot 100 may detect the distance between the robot 100 and the ground using an infrared sensor 120, an ultrasonic sensor 130, or a laser sensor 140.
The camera 150 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format.
In some embodiments of the present application, camera 150 may be a conventional camera used to obtain images of the environment surrounding robot 100, or a depth camera that may obtain distances of objects in the environment from the depth camera in addition to the images of the environment.
In some embodiments of the present application, the robot 100 may include at least one camera 150, the camera 150 for capturing images containing a target area or target object for object recognition.
Memory 160 is coupled to processor 110 for storing various software programs and/or sets of instructions. The memory may store various instructions in the embodiments of the present application, for example, a judging instruction, configured to judge whether a detection distance from the anti-falling sensor meets a preset anti-falling condition; the image identification instruction is used for carrying out object identification on the image shot by the camera; a positioning instruction for determining a current position of the robot; the movement instructions or stop movement instructions etc. may also be stored. In particular implementations, memory 160 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 160 may store an operating system, such as an embedded operating system, for example uCOS, vxWorks, RTLinux. Memory 160 may also store communication programs that may be used to communicate with a cell phone, one or more servers, or additional devices.
In some embodiments of the present application, the memory 160 may be used to store scene map data in which relevant safety areas are pre-marked, and the robot 100 may use the stored scene map data and its own position to determine whether it is in a safety area.
The motion component 170 is used to drive the motion of the robot 100 according to instructions issued by the processor 110. The motion component 170 may include a number of motion-related parts, such as motors, drive shafts, and wheels. In some embodiments of the present application, the motion component 170 is used to implement various motion patterns of the robot 100, such as moving forward, moving backward, moving left, moving right, and moving along an arc.
It will be appreciated that the configuration illustrated in fig. 6 does not constitute a particular limitation on the robot 100. In other embodiments of the present application, robot 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware or software, or a combination of software and hardware.
The following describes the technical solution of the present application in detail with reference to fig. 7 and 8 and with reference to a specific scenario in conjunction with the structure shown in fig. 6. As shown in fig. 7, the robot anti-falling scheme in some embodiments of the present application includes:
In step S701, the robot 100 acquires the distance between the robot and the support surface detected by the fall prevention sensor. The anti-drop sensor is disposed on the robot 100 for detecting the distance between the robot and the supporting surface, and in general, the distance detected by the sensor is the distance between the robot and the ground, but may be other distances, such as the distance between the robot and a certain level of stairs, etc. The anti-falling sensor can be an infrared sensor, an ultrasonic sensor, a laser sensor and other sensors of different types. The support surface may be various objects or areas in the current scenario that may provide some support to the robot 100, such as sewer manhole covers, stairs, carpeting, etc. The fall sensor also continuously detects the distance between the robot 100 and the support surface during the continuous movement of the robot 100 and provides the detected distance to the robot 100.
In some embodiments of the present application, at least one anti-falling sensor is disposed on the robot 100, and the plurality of anti-falling sensors can be uniformly distributed around the body of the robot 100, so that comprehensive detection on the surrounding environment of the robot 100 can be achieved, and detection dead zones are avoided.
In addition, when a plurality of anti-falling sensors are arranged on the robot 100, some of them are associated with the movement direction of the robot 100 and others are not. The distance data provided by the associated sensors is of greater value for the robot's anti-falling judgment, while the data from non-associated sensors is of little or no value for it. For example, suppose the robot 100 carries four anti-falling sensors distributed around its body at the 2 o'clock, 5 o'clock, 8 o'clock and 11 o'clock directions. When the robot 100 moves in the 12 o'clock direction, the 11 o'clock and 2 o'clock sensors, located at the front left and front right of the movement direction, are the sensors associated with the movement direction, while the 8 o'clock and 5 o'clock sensors, located at the rear left and rear right, are not associated with the movement direction.
In some embodiments of the present application, the robot 100 may acquire distance detection data provided by the anti-falling sensor associated with the movement direction, and the robot 100 may not need to acquire distance detection data of the anti-falling sensor not associated with the movement direction, so as to reduce the processed distance detection data, save processing resources, and improve the operation efficiency of the robot 100.
In step S702, the robot 100 determines whether the distance between the robot and the supporting surface meets a preset anti-falling condition. Here, the anti-falling condition describes the range of distances within which the robot 100 may fall. When the detected distance provided by the anti-falling sensor satisfies the anti-falling condition, for example when the detected distance is greater than or equal to a preset value, the robot 100 is determined to be within a distance range where falling is possible; when the detected distance does not satisfy the anti-falling condition, for example when it is smaller than the preset value, the robot 100 is determined not to be within that range.
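Purely as an illustrative sketch of steps S701 and S702 (the sensor object with mount_angle_deg and read_distance_m(), the 90° association window and the 0.08 m threshold are assumptions, not taken from the disclosure), the acquisition from direction-associated sensors and the anti-falling condition check could look like:

    FALL_DISTANCE_THRESHOLD_M = 0.08   # assumed preset value for the anti-falling condition

    def associated_sensors(sensors, heading_deg, window_deg=90.0):
        """Keep the anti-falling sensors whose mounting direction lies within
        +/- window_deg of the movement direction (cf. the o'clock example above)."""
        def angular_diff(a, b):
            d = (a - b) % 360.0
            return min(d, 360.0 - d)
        return [s for s in sensors if angular_diff(s.mount_angle_deg, heading_deg) <= window_deg]

    def fall_condition_met(sensors, heading_deg):
        """Steps S701/S702: read the distance to the supporting surface from the
        direction-associated sensors and test the preset anti-falling condition."""
        for s in associated_sensors(sensors, heading_deg):
            if s.read_distance_m() >= FALL_DISTANCE_THRESHOLD_M:
                return True        # within the distance range in which the robot may fall
        return False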
In some embodiments of the present application, steps S701 and S702 may be performed using a processor, which may be, for example, a single chip microcomputer running a real-time operating system, or a special CPU in a chipset dedicated to processing real-time signals; steps S703 to S707 may be performed using another processor, where the real-time processing requirement of the processor is lower than that of the previous processor, for example, an industrial control computer, a single-chip microcomputer, a CPU, etc.
In step S703, if the anti-falling condition is satisfied, the robot 100 acquires an image captured by a camera associated with its movement direction. That the detection distance provided by the anti-falling sensor meets the preset anti-falling condition indicates that the robot 100 may be in a scene where it could fall; because this detection may be a misjudgment, the scheme of the application confirms the current scene with other data to obtain a more accurate judgment.
In some embodiments of the present application, a camera is disposed on the robot 100, and the camera is used for capturing an environmental image in a moving direction of the robot 100. The environmental image may include obstructions, ground areas, and the like. Here, the camera may be a normal camera or a depth camera, the normal camera is used for capturing a two-dimensional image, the depth camera may be used for capturing a three-dimensional image, and the three-dimensional image further includes distance information between the object and the depth camera.
In some embodiments of the present application, the number of cameras provided on the robot 100 may be one or more. When the camera is one, the camera can be arranged on a rotatable device, so that shooting of the surrounding environment of the robot 100 is realized through rotation of the camera. When the robot 100 moves, the camera is rotated so that the shooting direction of the camera is consistent with the moving direction of the robot 100, and the camera becomes a camera associated with the moving direction of the robot, and the shot image can be used for further anti-falling judgment of the robot 100.
When the cameras are multiple, the shooting directions of the cameras can be separated, the shooting directions of the cameras are different, and the images shot by the cameras can be panoramic images of the surrounding environment of the robot 100. At this time, the cameras associated with the movement direction of the robot 100 are those capable of capturing images in the movement direction of the robot 100.
In some embodiments of the present application, the robot 100 may also check the related safety signal and decide, according to whether the safety signal exists, whether to perform the recognition of the camera image and the safety scene judgment. The safety signal is state data stored after the robot has executed steps S703 to S705 and judged that it is currently in a safe scene; it indicates that the robot is currently in a safe scene, so that the safety scene judgment need not be repeated once the robot has already been judged to be in one, which reduces unnecessary consumption of computing resources. If the robot 100 finds that a safety signal exists, the current scene may be directly determined to be a safe scene without image recognition and safety scene judgment. If no safety signal is found, the safety scene judgment is required. The safety signal may be provided with a timeout mechanism: if no new safety signal is received within a certain time (e.g. 100 ms), the original safety signal is invalidated.
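As a minimal sketch of the safety signal with the timeout mechanism described above (the class and method names are assumptions; the 100 ms window reuses the example value):

    import time

    class SafetySignal:
        """Illustrative holder for the safety signal with a timeout mechanism."""

        def __init__(self, timeout_s=0.1):          # 100 ms, as in the example above
            self.timeout_s = timeout_s
            self._last_set = None

        def set(self):
            """Record that the robot has just been judged to be in a safe scene."""
            self._last_set = time.monotonic()

        def is_valid(self):
            """A signal older than the timeout is treated as expired."""
            return (self._last_set is not None and
                    time.monotonic() - self._last_set <= self.timeout_s)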
In step S704, the robot 100 recognizes an object or region in the captured image. Here, the identification of the object or region in the image may be performed by a deep learning technique. Deep learning technology is one research direction in the field of machine learning and is an important component of artificial intelligence technology. The deep learning technology can learn the internal rule and the representation level of sample data, and the information obtained in the learning process can effectively explain the data such as characters, images, sounds and the like. Deep learning techniques enable machines to have analytical learning capabilities like humans to recognize text, images, sounds, etc.
Deep learning mainly involves three classes of methods: neural network systems based on convolution operations, self-encoding neural networks based on multiple layers of neurons, and deep belief networks. Neural network systems based on convolution operations are convolutional neural networks; many types of convolutional neural networks have been developed to date, greatly improving the accuracy of identifying objects or areas in images. A self-encoding neural network is a neural network that learns the characteristics of sample data in an unsupervised manner. A deep belief network is a generative probabilistic neural network model that can establish a joint distribution between sample data and object classes; it consists of multiple layers of restricted Boltzmann machines (RBMs). In contrast to conventional discriminative neural networks, it can be trained with either supervised or unsupervised learning.
In some embodiments of the present application, the robot 100 uses a pre-trained neural network recognition model to recognize objects such as stairs, steps, walls, gates, etc., or areas such as drainage grids, puddles, black light-absorbing carpets, transparent glass floors, etc., from images captured by cameras. Here, the neural network recognition model is usually obtained by training on an external server, and the trained neural network recognition model is transplanted to the robot 100. Here, the user may input an image including the object or the region as training data into a neural network of some kind to train, and the neural network obtained after training is a neural network recognition model capable of recognizing the object or the region from the image.
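The following is only a hedged sketch of how such a pre-trained recognition model might be invoked on one camera frame; the use of PyTorch/torchvision, the 224x224 input size and the label list are assumptions, not part of the disclosure:

    import torch
    from torchvision import transforms

    # Hypothetical label set; the text only names examples such as stairs, steps,
    # drainage gratings, puddles, black light-absorbing carpets and transparent glass floors.
    LABELS = ["stair", "step", "drain_grating", "puddle", "black_carpet",
              "glass_floor", "wall", "gate", "other"]

    _preprocess = transforms.Compose([
        transforms.ToTensor(),            # HxWxC image array -> CxHxW float tensor
        transforms.Resize((224, 224)),
    ])

    def recognize(frame, model):
        """Run the pre-trained neural network recognition model on one camera frame
        and return the label of the recognized object or area."""
        x = _preprocess(frame).unsqueeze(0)
        with torch.no_grad():
            logits = model(x)
        return LABELS[int(logits.argmax(dim=1))]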
In step S705, the robot 100 determines whether the recognition result satisfies a preset safety scene condition. Here, the result of the neural network recognition model on the image captured by the camera may be an object or an area in the current scene, such as a step, a puddle, a wall or a gate. The safety scene condition may describe scenes, configured by the user, that anti-falling sensors tend to misjudge but that do not actually cause the robot to fall, such as scenes containing a drainage grating, a black light-absorbing area or a transparent floor area. If the recognition result matches an object or area included in the safety scene condition, the recognition result is considered to satisfy the safety scene condition and the robot 100 is currently in a safe scene. For example, if the recognition result for the camera image is a black light-absorbing carpet and the safety scene condition includes scenes with a black light-absorbing area, the recognition result satisfies the safety scene condition. Conversely, if the recognition result is a step and no scene containing a step exists in the safety scene condition, the recognition result does not satisfy the safety scene condition.
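The comparison against the safety scene condition in step S705 then reduces to a simple membership test; the label set below is illustrative and mirrors the examples just given:

    # Scenes configured as prone to sensor misjudgment but not actually causing a fall.
    SAFE_SCENE_LABELS = {"drain_grating", "black_carpet", "glass_floor"}

    def in_safe_scene(recognized_label):
        """Step S705: the recognition result satisfies the safety scene condition
        only if it matches one of the configured safe scenes."""
        return recognized_label in SAFE_SCENE_LABELS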
In step S706, if the recognition result does not satisfy the safety scene condition, the robot 100 sends a stop-motion instruction to the motion component 170. Here, the recognition result of the camera image does not support the robot 100 being in a safe scene, which means the detection of the anti-falling sensor was not a misjudgment and the robot 100 does face a risk of falling. Accordingly, the robot 100 sends a stop-motion instruction to the motion component 170, and the motion component 170 stops the motion of the robot 100 according to the instruction, for example by cutting power to the motor or braking the wheels, thereby stopping the motion of the robot 100 in the movement direction.
In some embodiments of the present application, the stop-motion instruction sent by the robot 100 is a stop-motion instruction for the current movement direction. According to it, the motion component 170 stops advancing in that direction but may still execute motion instructions in other directions, for example in the opposite direction, so that the robot 100 retreats.
In step S707, if the recognition result satisfies the safety scene condition, the robot 100 continues to execute the motion instruction. Here, the recognition result satisfying the safety scene condition means that the robot 100 is currently in a safe scene, the detection of the anti-falling sensor was a misjudgment, and the robot 100 does not actually face a risk of falling; it may continue to advance in the movement direction. The robot 100 therefore continues to send motion instructions to the motion component 170, which executes them so that the robot 100 keeps moving.
In addition, the robot 100 sets a safety signal indicating that it is currently in a safe scene. The safety signal may be set in a timed manner, with a frequency between 5 Hz and 50 Hz, or in a non-timed manner, for example set once the robot 100 has been determined to be in a safe scene.
If one processor executes steps S701 and S702 and another executes steps S703 to S707, the processor executing steps S703 to S707 may, after the robot 100 is determined to be in a safe scene, send the safety signal to the processor executing steps S701 and S702, either periodically or not. The periodic sending may be given a duration, after which the safety signal is no longer sent, so that the safety signal is reset once the robot 100 leaves the safe scene.
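A hedged sketch of such a timed, expiring safety-signal sender is given below; the 10 Hz rate, the 2 s duration and the plain callback used as transport are assumptions chosen only to illustrate the behaviour described above:

    import threading
    import time

    def publish_safety_signal(send, rate_hz=10.0, duration_s=2.0):
        """Periodically send the safety signal to the processor running steps
        S701/S702 and stop after duration_s, so the signal expires once the
        robot has left the safe scene.  `send` is a hypothetical callback."""
        def _worker():
            deadline = time.monotonic() + duration_s
            period = 1.0 / rate_hz            # 10 Hz lies within the 5 Hz - 50 Hz range above
            while time.monotonic() < deadline:
                send({"safe": True, "timestamp": time.monotonic()})
                time.sleep(period)
        t = threading.Thread(target=_worker, daemon=True)
        t.start()
        return t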
In some embodiments of the present application, when a plurality of anti-falling sensors are arranged on the robot 100, the safety signal sent by the robot 100 may further specify that one or more of them are to be masked. According to the safety signal, the robot 100 then ignores the distance data provided by the masked anti-falling sensors and processes only the data from the unmasked ones, so that the robot 100 is not trapped and can autonomously leave the supposedly dangerous area and continue working.
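Putting steps S701 to S707 together, one control cycle of the scheme in fig. 7 could be sketched as follows; the function names reuse the illustrative sketches above, the robot attributes (fall_sensors, heading_deg, recognition_model, camera_for_heading(), motion.stop()) are hypothetical, and sensor masking is omitted for brevity:

    def anti_fall_cycle(robot, safety_signal):
        """One illustrative control cycle of the fig. 7 scheme (S701-S707)."""
        if not fall_condition_met(robot.fall_sensors, robot.heading_deg):
            return                                        # S701/S702: no fall risk detected
        if safety_signal.is_valid():
            return                                        # valid safety signal: skip re-checking
        frame = robot.camera_for_heading().capture()      # S703: image in the movement direction
        label = recognize(frame, robot.recognition_model) # S704: object / area recognition
        if in_safe_scene(label):                          # S705: safety scene condition
            safety_signal.set()                           # S707: keep moving and set the signal
        else:
            robot.motion.stop()                           # S706: stop in the movement direction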
Fig. 8 shows a solution for robot anti-drop in other embodiments of the present application. As shown in fig. 8, the scheme includes:
steps S801 and S802 are the same as steps S701 and S702, respectively, and the description of steps S801 and S802 will refer to the description of steps S701 and S702 described above, and will not be repeated here.
Similarly, in some embodiments of the present application, steps S801 and S802 may be performed using a processor, which may be, for example, a single chip microcomputer running a real-time operating system, or a special CPU in a chipset dedicated to processing real-time signals; the steps S803 to S806 may be performed using another processor, and the real-time processing requirement of this processor is lower than that of the previous processor, for example, an industrial control computer, a single-chip microcomputer, a CPU, etc.
In step S803, the robot 100 acquires its position in the current scene. Determining the robot's position in the current scene may be implemented with indoor robot positioning techniques, which can be roughly divided by principle into three classes: proximity information methods, scene analysis methods, and geometric feature methods. A proximity information method uses the limited range of a signal to decide whether the point to be measured is near a certain reference point, and can only provide approximate positioning. A scene analysis method determines the robot's position by measuring the received signal strength at a location and comparing it with the strength previously measured at that location and stored in a database. A geometric feature method positions the robot using geometric principles and requires the positions of fixed or otherwise known base stations; it can be further divided into trilateration, triangulation, hyperbolic positioning, and the like.
A typical indoor positioning process is to place auxiliary nodes at fixed, known positions in the indoor environment. The position information may be stored directly in the auxiliary nodes, as with radio frequency identification (RFID) tags, or stored in a database on a computer terminal, as with infrared or ultrasonic beacons. The distance from the robot to the auxiliary nodes is then measured to determine the robot's relative position.
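For the trilateration variant mentioned above, a minimal least-squares sketch (assuming at least three auxiliary nodes with known 2-D positions and measured ranges; not taken from the disclosure) is:

    import numpy as np

    def trilaterate_2d(anchors, distances):
        """Estimate the robot's (x, y) from known anchor positions and measured
        ranges by linearising the range equations against the first anchor."""
        anchors = np.asarray(anchors, dtype=float)        # shape (n, 2), n >= 3
        d = np.asarray(distances, dtype=float)
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return float(sol[0]), float(sol[1])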
In step S804, the robot 100 determines whether its current position lies in a safety area in the scene map. A scene map is established in advance for the environment in which the robot 100 operates; it is a symbolic representation of the environment in which the robot performs its work task and describes the robot's working environment.
In some embodiments of the present application, the scene map may be created with simultaneous localization and mapping (SLAM) techniques. Current SLAM methods can be broadly divided into two classes: methods based on probabilistic models, for example complete SLAM based on Kalman filtering, compressed filtering, or FastSLAM; and methods based on non-probabilistic models, for example SM-SLAM, scan matching, data fusion, and fuzzy-logic-based methods.
Safety areas are also marked in the scene map of the robot 100 in advance. The safety-area identification information may include the range of the safety area, which can be defined in various ways, for example by geometric shapes such as contour lines, polygons, or circles. The safety areas may be annotated manually by the user. For example, the areas covered by the drain grating, transparent glass floor, and black light-absorbing carpet in the examples above are all calibrated in advance in the scene map as safety areas of the robot 100.
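A minimal point-in-region test for such pre-calibrated safety areas could look like the sketch below (ray casting for polygons plus a radius test for circles; the layout of the map annotations is an assumption):

    def point_in_polygon(x, y, polygon):
        """Ray-casting test; polygon is a list of (x, y) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def in_calibrated_safe_area(position, safe_areas):
        """Step S804: test the robot's current position against every safety area
        calibrated in the scene map (polygons or circles)."""
        x, y = position
        for area in safe_areas:
            if area["shape"] == "polygon" and point_in_polygon(x, y, area["vertices"]):
                return True
            if area["shape"] == "circle":
                cx, cy = area["center"]
                if (x - cx) ** 2 + (y - cy) ** 2 <= area["radius"] ** 2:
                    return True
        return False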
In step S805, if the current position of the robot 100 is not in the safety area, the robot 100 transmits a movement stop instruction to the movement part 170. The manner in which the movement part 170 stops the movement of the robot 100 according to the stop movement instruction is similar to that described in the above step S706, and will not be described again here.
In step S806, if the current position of the robot 100 is in the safety area, the robot 100 continues to execute the movement instruction. Similarly, the procedure of the robot 100 to continue the execution of the motion instruction and the manner of setting the safety signal are similar to those described in the above step S707, and are not repeated here.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the present application may be implemented as a computer program or program code that is executed on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (Digital Signal Processor, DSP), microcontroller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memory used to transmit information over the Internet in the form of electrical, optical, acoustical or other propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module is a logic unit/module, and in physical aspect, one logic unit/module may be one physical unit/module, or may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules, where the physical implementation manner of the logic unit/module itself is not the most important, and the combination of functions implemented by the logic unit/module is the key to solve the technical problem posed by the present application. Furthermore, to highlight the innovative part of the present application, the above-described device embodiments of the present application do not introduce units/modules that are less closely related to solving the technical problems presented by the present application, which does not indicate that the above-described device embodiments do not have other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (7)

1. A robot anti-fall method, the method comprising:
the method comprises the steps of obtaining the distance between a robot and a supporting surface, wherein the distance is detected by anti-falling sensors, the anti-falling sensors are uniformly distributed around the body of the robot, and the robot obtains the distance provided by the anti-falling sensors related to the movement direction;
judging whether the distance meets a preset anti-falling condition or not, wherein the anti-falling condition is used for describing a distance range of the robot which is likely to fall down;
when the distance meets a preset anti-falling condition, determining whether the robot is in a safe area where falling cannot occur according to the environment, including: acquiring an image shot by a camera associated with the movement direction of the robot; identifying the image through a preset neural network identification model, and obtaining an image identification result, wherein the image identification result comprises an object or an area; judging whether an image recognition result meets a preset safety scene condition or not, wherein the safety scene condition is used for describing a scene that the robot cannot fall down; when the image recognition result does not meet the preset safety scene condition, determining that the robot is not in the safety area;
wherein before determining whether the robot is in a safe area where falling cannot occur according to the environment, the method further comprises: checking whether a valid safety signal exists, wherein the safety signal is state data stored after the robot is judged to be currently in a safety scene and is provided with a timeout mechanism; and determining that the robot is in the safety area when a valid safety signal exists;
and sending a motion stopping instruction to the robot when the robot is not in the safety area.
2. The method of claim 1, wherein determining whether the robot is in a safe area where no drop will occur based on the environment, further comprises:
and determining whether the robot is in a safe area where falling cannot occur or not through the safe area calibrated in the positioning and scene map of the robot.
3. The method of claim 1, wherein the safety scene condition comprises at least one of: a drainage grate, a black light-absorbing area, and a transparent floor area.
4. The method of claim 2, wherein determining, from the positioning of the robot and the safe area marked in the scene map, whether the robot is in a safe area where no fall will occur comprises:
acquiring the position of the robot in the current scene;
judging whether the position is within a safe area marked in the scene map; and
when the position is not within a safe area marked in the scene map, determining that the robot is not in the safe area.
5. The method of claim 1, wherein the method further comprises:
when the robot is in the safe area, continuing to execute the motion instruction and setting the safety signal.
6. A robot, comprising:
a memory for storing instructions for execution by the first processor or the second processor; and
the first processor, being one of a plurality of processors of the robot, is used for acquiring the distance between the robot and the supporting surface detected by the anti-falling sensors and judging whether the distance meets a preset anti-falling condition, wherein the anti-falling sensors are uniformly distributed around the body of the robot, the robot acquires the distance provided by the anti-falling sensor associated with its direction of movement, and the anti-falling condition describes a distance range within which the robot is likely to fall;
the second processor, being one of a plurality of processors of the robot, is used for determining, according to the environment, whether the robot is in a safe area where no fall will occur when the distance meets the preset anti-falling condition, which comprises: acquiring an image shot by a camera associated with the direction of movement of the robot; recognizing the image through a preset neural network recognition model to obtain an image recognition result, wherein the image recognition result comprises an object or an area; judging whether the image recognition result meets a preset safety scene condition, wherein the safety scene condition describes a scene in which the robot will not fall; and, when the image recognition result does not meet the preset safety scene condition, determining that the robot is not in the safe area; before determining, according to the environment, whether the robot is in a safe area where no fall will occur, further checking whether a valid safety signal exists, wherein the safety signal is state data stored after it has been judged that the robot is currently in a safety scene and is subject to a timeout mechanism, and, when a valid safety signal exists, determining that the robot is in the safe area; and sending a stop-motion instruction to the robot when the robot is not in the safe area;
at least one anti-falling sensor for detecting the distance between the robot and the supporting surface; and
a motion component for receiving a stop-motion instruction sent by the second processor and stopping the motion of the robot according to the stop-motion instruction, and for receiving a motion instruction sent by the second processor and continuing the motion of the robot according to the motion instruction.
7. A readable storage medium having instructions stored thereon that, when executed on a robot, cause the robot to perform the robot anti-falling method of any one of claims 1-5.
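
Note (illustrative only, not part of the claimed subject matter): the decision flow recited in claim 1 can be summarized by the following minimal Python sketch. All interface names (read_distance, capture_image, recognize, stop_motion), the scene labels, and the numeric thresholds are hypothetical placeholders assumed for illustration; they are not disclosed by the patent.

import time

FALL_DISTANCE_THRESHOLD_M = 0.05   # assumed anti-falling condition: a larger gap may indicate a drop-off
SAFETY_SIGNAL_TIMEOUT_S = 2.0      # assumed timeout for the stored safety signal
SAFE_SCENE_LABELS = {"drainage_grate", "black_absorbing_area", "transparent_floor"}

safety_signal_ts = None            # timestamp of the last confirmed safety scene


def anti_fall_step(robot):
    # One control-loop iteration of the anti-falling check sketched from claim 1.
    global safety_signal_ts

    # Obtain the distance from the anti-falling sensor associated with the direction of movement.
    distance = robot.read_distance(robot.heading)

    # Judge whether the distance meets the anti-falling condition.
    if distance <= FALL_DISTANCE_THRESHOLD_M:
        return "continue"                     # supporting surface is close enough; keep moving

    # Before the image check: is there a valid (unexpired) safety signal?
    if safety_signal_ts is not None and time.time() - safety_signal_ts < SAFETY_SIGNAL_TIMEOUT_S:
        return "continue"                     # already judged to be in a safety scene

    # Recognize the scene from the camera associated with the direction of movement.
    image = robot.capture_image(robot.heading)
    label = robot.recognize(image)            # e.g. output of a neural-network recognition model

    if label in SAFE_SCENE_LABELS:
        safety_signal_ts = time.time()        # set the safety signal (cf. claim 5)
        return "continue"                     # keep executing the motion instruction

    # Not in a safe area: send a stop-motion instruction.
    robot.stop_motion()
    return "stopped"

The map-based check of claims 2 and 4 can be sketched in the same spirit, assuming (as a simplification not stated in the claims) that safe areas are marked in the scene map as axis-aligned rectangles:

def in_marked_safe_area(position, safe_areas):
    # position: (x, y) of the robot in the scene map.
    # safe_areas: iterable of (x_min, y_min, x_max, y_max) rectangles marked as safe.
    x, y = position
    return any(x_min <= x <= x_max and y_min <= y <= y_max
               for x_min, y_min, x_max, y_max in safe_areas)
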
CN202110886723.6A 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium Active CN113524265B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110886723.6A CN113524265B (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium
CN202310454027.7A CN116372990A (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110886723.6A CN113524265B (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310454027.7A Division CN116372990A (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN113524265A CN113524265A (en) 2021-10-22
CN113524265B true CN113524265B (en) 2023-05-26

Family

ID=78090323

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110886723.6A Active CN113524265B (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium
CN202310454027.7A Pending CN116372990A (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310454027.7A Pending CN116372990A (en) 2021-08-03 2021-08-03 Robot anti-falling method, robot and readable storage medium

Country Status (1)

Country Link
CN (2) CN113524265B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114111703A (en) * 2021-11-24 2022-03-01 上海景吾智能科技有限公司 Falling detection system and robot
CN114200935A (en) * 2021-12-06 2022-03-18 北京云迹科技股份有限公司 Robot anti-falling method and device, electronic equipment and storage medium
CN114494848B (en) * 2021-12-21 2024-04-16 重庆特斯联智慧科技股份有限公司 Method and device for determining vision path of robot
CN114770604B (en) * 2022-05-18 2024-01-16 深圳优地科技有限公司 Robot test system
CN117095342B (en) * 2023-10-18 2024-02-20 深圳市普渡科技有限公司 Drop zone detection method, drop zone detection device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8862271B2 (en) * 2012-09-21 2014-10-14 Irobot Corporation Proximity sensing on mobile robots
KR20190002152A (en) * 2017-06-29 2019-01-08 엘지전자 주식회사 Method of identifying entry restriction and robot implementing thereof
CN112975995B (en) * 2019-12-17 2022-08-16 沈阳新松机器人自动化股份有限公司 Service robot chassis anti-falling array device and anti-falling method
CN110852312B (en) * 2020-01-14 2020-07-17 深圳飞科机器人有限公司 Cliff detection method, mobile robot control method, and mobile robot

Also Published As

Publication number Publication date
CN113524265A (en) 2021-10-22
CN116372990A (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN113524265B (en) Robot anti-falling method, robot and readable storage medium
US20200306983A1 (en) Mobile robot and method of controlling the same
US10553044B2 (en) Self-diagnosis of faults with a secondary system in an autonomous driving system
JP5946147B2 (en) Movable human interface robot
EP3602396A1 (en) Embedded automotive perception with machine learning classification of sensor data
US11145146B2 (en) Self-diagnosis of faults in an autonomous driving system
US20180067490A1 (en) Pre-tracking sensor event detection and fusion
JP7147420B2 (en) OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND COMPUTER PROGRAM FOR OBJECT DETECTION
CN106950952B (en) Farmland environment sensing method for unmanned agricultural machinery
Ponnada et al. A hybrid approach for identification of manhole and staircase to assist visually challenged
US20130093852A1 (en) Portable robotic device
US20170347066A1 (en) Monitor apparatus and monitor system
CN110443275B (en) Method, apparatus and storage medium for removing noise
US20200200545A1 (en) Method and System for Determining Landmarks in an Environment of a Vehicle
KR20200027087A (en) Robot and the controlling method thereof
Amin et al. Quality of obstacle distance measurement using ultrasonic sensor and precision of two computer vision-based obstacle detection approaches
CN115424233A (en) Target detection method and target detection device based on information fusion
CN113158779A (en) Walking method and device and computer storage medium
CN114966714A (en) Window occlusion detection method and device
Dong et al. Indoor tracking using crowdsourced maps
CN114365003A (en) Adjusting device and laser radar measuring device
CN113534805B (en) Robot recharging control method, device and storage medium
JP2008009927A (en) Mobile robot
CN117523914A (en) Collision early warning method, device, equipment, readable storage medium and program product
KR20220107920A (en) Prediction of the attitude of pedestrians in front based on deep learning technology using camera image and collision risk estimation technology using this

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant