WO2023217193A1 - Robot and method for robot to recognise fall - Google Patents

Robot and method for robot to recognise fall

Info

Publication number
WO2023217193A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
target
target part
sole
detected
Prior art date
Application number
PCT/CN2023/093320
Other languages
French (fr)
Chinese (zh)
Inventor
骆张强
许鲤蓉
Original Assignee
神顶科技(南京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 神顶科技(南京)有限公司 filed Critical 神顶科技(南京)有限公司
Publication of WO2023217193A1 publication Critical patent/WO2023217193A1/en

Links

Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 - Floor-sweeping machines, motor-driven
    • A47L11/40 - Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 - Installations of electric equipment
    • A47L11/4008 - Arrangements of switches, indicators or the like
    • A47L2201/00 - Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions

Definitions

  • This application relates to the field of artificial intelligence technology, specifically to robots and robot fall recognition technology.
  • The sweeping robot is one type of robot: a smart home appliance that cleans indoor floors and brings a degree of convenience to daily life.
  • However, existing robots only perform cleaning and lack other safety-monitoring functions such as fall detection and alarm, which matter especially in households where the elderly live alone or where only the elderly and children are at home; they are therefore not fully intelligent or convenient.
  • Moreover, most existing safety or fall detection functions are implemented with gravity sensors or with cameras mounted at a high vantage point, neither of which suits a sweeping robot.
  • The purpose of this application is to provide a robot and a method for the robot to identify falls, realizing automatic fall detection and recognition so that the robot can better monitor the safety of family members while attending to its own service functions.
  • This application discloses a method for a robot to identify falls, comprising the following steps:
  • A. The robot moves about, performs detection of a first target part (a foot sole or shoe sole), and records the position and posture of the detected first target part;
  • B. According to the position and posture of the first target part, the robot is controlled to move and/or change posture so as to detect other target parts, and the detected positions of those parts are recorded;
  • C. Whether the current object is in a fallen state is judged from the detected positions of the target parts.
  • In a preferred example, the detected positions of the target parts are the positions of their respective centers of gravity, and step C further comprises: judging whether the lines connecting the centers of gravity of all target parts tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle; if so, the current object is judged to be in a fallen state.
  • In a preferred example, step B further comprises: judging from the position and posture of the first target part that the current object may have fallen, estimating the positions of the suspected fallen object's other target parts, and controlling the robot to move and/or change posture according to the estimated positions to detect those parts and record their positions.
  • In a preferred example, after step B, multiple consecutive frames of the other target parts are collected and each part's position is recorded per frame; step C then judges the fall state from the change in those positions, and if the positions vary only within a preset range across frames, the current object is judged to be in a fallen state.
  • In a preferred example, step B further comprises controlling the robot to circle N times around the outline of the object associated with the first target part, performing the target-part detection N times and recording each detected position; step C then judges the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
  • In a preferred example, when performing the first-target-part detection: if a foot sole or shoe sole is detected, the robot's binocular vision unit collects the sole's depth information and its size is determined from that depth; the size is then checked against a preset condition, and if it passes, the sole is taken as the first target part and its position recorded, otherwise detection continues.
  • the other target parts include the face and/or legs.
  • In a preferred example, step B further comprises recording the detected postures of the other target parts, and step C further comprises judging whether the current object is in a fallen state from both the detected positions and postures of the target parts.
  • In a preferred example, the robot uses a target recognition model to detect the target parts. The model is trained as follows: the robot collects a sample image set comprising facial images, foot images, and the leg image associated with each foot image; the facial images cover different shooting distances and different facial angles and postures, and the foot images cover bare soles and shoe soles in different bottom-facing orientations; each image is annotated with a category label and a posture label. A deep neural network trained on this sample set yields the target recognition model.
  • the robot is a robot that moves close to the ground.
  • the robot is a sweeping robot.
  • This application also discloses a robot including:
  • a target part detection module, configured to detect the first target part (a foot sole or shoe sole) while the robot moves, record the detected part's position and posture, control the robot to move and/or change posture according to that position and posture so as to detect other target parts, and record their detected positions; and
  • a fall determination module, configured to judge from the detected positions of the target parts whether the current object is in a fallen state.
  • In a preferred example, the detected positions of the target parts are the positions of their respective centers of gravity, and the fall determination module further judges whether the lines connecting those centers of gravity tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle; if so, the current object is judged to be in a fallen state.
  • In a preferred example, the target part detection module further judges from the first target part's position and posture that the current object may have fallen, estimates the positions of the suspected fallen object's other target parts, and controls the robot's movement and/or posture according to the estimated positions to detect and record those parts.
  • In a preferred example, when the other target parts are detected, the target part detection module continuously collects multiple frames of their data and records each part's position per frame, and the fall determination module judges the fall state from the change in those positions; if the positions vary only within a preset range across frames, the current object is judged to be in a fallen state.
  • In a preferred example, the target part detection module controls the robot, according to the first target part's position and posture, to circle N times around the outline of the object associated with that part, performing the target-part detection N times and recording each detected position, and the fall determination module judges the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
  • In a preferred example, the robot further includes a binocular vision unit for collecting depth information of the foot sole or shoe sole; if a sole is detected, the target part detection module determines its size from the collected depth information and checks whether that size meets a preset condition, taking the sole as the first target part and recording its position if it does, and otherwise continuing the first-target-part detection.
  • the other target parts include the face and/or legs.
  • In a preferred example, the target part detection module also records the detected postures of the other target parts, and the fall determination module judges whether the current object is in a fallen state from both the detected positions and postures.
  • In a preferred example, the robot further includes an image acquisition unit, and the target part detection module further includes a target recognition model. The image acquisition unit collects the sample image set described above (facial images at different shooting distances, angles, and postures; foot images of bare soles and shoe soles in different bottom-facing orientations; and the leg image associated with each foot image), with every image annotated with a category label and a posture label; the target recognition model is obtained by training a deep neural network on this set. The image acquisition unit also captures images while the robot moves and feeds them into the target recognition model to detect the target parts.
  • Compared with the prior art, embodiments of this application offer at least the following advantages: for the uniquely low viewpoint of robots that move close to the ground (such as sweeping robots), this invention proposes a low-viewpoint image processing and recognition method dedicated to such robots, realizing automatic fall detection and recognition so that the robot can better monitor the safety of family members (especially the elderly and children) while still attending to cleaning.
  • Using combined foot-and-leg data plus facial data as samples, with categories and postures as labels, yields a trained model that directly recognizes the category and posture of each target part and can therefore effectively judge whether a family member has fallen. Determining a fall from the detected positions, position changes, or combined positions and postures of the face, legs, and feet gives high accuracy.
  • Figure 1 is a schematic flowchart of a method for identifying falls by a robot according to the first embodiment of the present application.
  • Figure 2 is a schematic flowchart of a method for a robot to identify falls according to an embodiment of the present application.
  • Figure 3 is a schematic flowchart of a method for a robot to identify falls according to another embodiment of the present application.
  • Figure 4 is a schematic structural diagram of a robot according to the second embodiment of the present application.
  • the first embodiment of the present application relates to a method for a robot to identify falls.
  • the process is shown in Figure 1.
  • the method includes the following steps:
  • Step 101: the robot moves about, performs detection of a first target part (a foot sole or shoe sole), and records the position and posture of the detected first target part;
  • Step 102: according to the position and posture of the first target part, the robot is controlled to move and/or change posture so as to detect other target parts, and the detected positions of those parts are recorded;
  • Step 103: whether the current object is in a fallen state is judged from the detected positions of the target parts.
  • In step 101, the robot moves about, detects the first target part (a foot sole or shoe sole), and records its position and posture.
  • the robot can be a robot that moves close to the ground, with its camera positioned close to the ground.
  • the robot may be a sweeping robot usually used for indoor cleaning, and its height is usually less than 20 cm.
  • Optionally, when performing the first-target-part detection in step 101, the method further includes the following steps:
  • If a foot sole or shoe sole is detected, the robot's binocular vision unit collects its depth information and the sole's size is determined from that depth; the size is then checked against a preset condition, and if it passes, the sole is taken as the first target part and its position recorded, otherwise the robot keeps moving and detecting.
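As an illustration only, this size check might look like the following Python sketch. The pinhole back-projection, the focal-length values, and the use of the roughly 31 cm foot-width ceiling mentioned later in this description are assumptions, not details fixed by the patent.

```python
import numpy as np

def sole_size_from_depth(depth_roi: np.ndarray, bbox_px: tuple,
                         fx: float = 600.0, fy: float = 600.0) -> tuple:
    """Estimate the metric width/height of a detected sole.

    depth_roi: depth values (metres) inside the detection box.
    bbox_px:   (width_px, height_px) of the detection box.
    fx, fy:    assumed focal lengths in pixels (camera specific).
    """
    z = float(np.median(depth_roi))   # robust distance to the sole
    w_px, h_px = bbox_px
    width_m = w_px * z / fx           # pinhole back-projection
    height_m = h_px * z / fy
    return width_m, height_m

def is_plausible_sole(width_m: float, max_width_m: float = 0.31) -> bool:
    # The description later caps the characteristic foot width at
    # roughly 31 cm; anything wider is rejected as "not a foot".
    return 0.0 < width_m <= max_width_m
```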
  • In step 102, the robot is controlled to move and/or change posture according to the first target part's position and posture so as to detect other target parts, and their detected positions are recorded.
  • the other target parts in step 102 can be any part on the human body, and the other target parts include, for example, the face and/or the legs.
  • this step 102 may further include the following sub-steps 102a and 102b:
  • Step 102a Determine that the current object is suspected of falling based on the position and posture of the first target part, and estimate the positions of other target parts of the suspected fallen object;
  • Step 102b Control the robot to move the position and/or change the posture according to the estimated position to perform detection of other target parts, and record the detected positions of other target parts.
  • Optionally, the robot can obtain a target part's position in several ways; for example, but not limited to, the binocular vision unit can compute a disparity map of the target part and derive the part's position information from it.
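For readers unfamiliar with stereo geometry, a minimal sketch of recovering a part's 3-D position from a disparity map follows; the focal length, principal point, and baseline are placeholders for the robot's actual calibration.

```python
def position_from_disparity(u: float, v: float, disparity_px: float,
                            fx: float = 600.0, baseline_m: float = 0.06,
                            cx: float = 320.0, cy: float = 240.0):
    """Triangulate pixel (u, v) with a stereo disparity into camera space."""
    if disparity_px <= 0:
        return None                       # no valid stereo match
    z = fx * baseline_m / disparity_px    # depth from the disparity map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fx                 # assumes fx is close to fy
    return (x, y, z)
```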
  • In step 103, whether the current object is in a fallen state is judged from the detected positions of the target parts. This step can be implemented in a variety of ways.
  • In one embodiment, the detected positions of the target parts are the positions of their respective centers of gravity; step 103 may then further include judging whether the lines connecting the centers of gravity of all target parts tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle, and if so, the current object is judged to be in a fallen state.
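One possible reading of this center-of-gravity test, sketched in Python: fit the principal axis of the centroids, require that they lie near one line, and require that line to be nearly parallel to the ground. The residual and angle thresholds are illustrative assumptions.

```python
import numpy as np

def is_fallen(centroids_m: np.ndarray,
              max_line_residual_m: float = 0.10,
              max_ground_angle_deg: float = 20.0) -> bool:
    """Fall test on the 3-D centers of gravity of face, legs and feet.

    centroids_m: (N, 3) centroids in a frame whose z axis points up,
                 so the ground is the z = 0 plane.
    """
    pts = centroids_m - centroids_m.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    axis = vt[0]                                # principal (best-fit) line
    proj = pts @ axis
    residual = np.linalg.norm(pts - np.outer(proj, axis), axis=1).max()
    if residual > max_line_residual_m:
        return False                            # centroids not on one line
    # Angle between the fitted line and the ground plane.
    ground_angle = np.degrees(np.arcsin(min(1.0, abs(float(axis[2])))))
    return ground_angle < max_ground_angle_deg
```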
  • In another embodiment, after step 102 the method may further include: when the other target parts are detected, continuously collecting multiple frames of their data and recording each part's position per frame; step 103 is then implemented as judging the fall state from the change in those positions, for example, but not limited to, judging the current object to be in a fallen state if each part's position varies only within a preset range across frames.
  • In yet another embodiment, step 102 may be implemented as controlling the robot, according to the first target part's position and posture, to circle N times around the outline of the object associated with that part, performing the target-part detection N times and recording each detected position; step 103 may then include judging the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
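Both the multi-frame check and the N-circle check reduce to asking whether repeated position measurements of the same part stay within a preset range; one hedged sketch covers both, with the tolerance value assumed.

```python
import numpy as np

def positions_stable(positions_m: np.ndarray, tolerance_m: float = 0.05) -> bool:
    """True if all recorded positions of one part stay within a preset range.

    positions_m: (K, 3) positions of the same target part, recorded either
                 over K consecutive frames or over K = N circling passes.
    """
    spread = positions_m.max(axis=0) - positions_m.min(axis=0)
    return bool(np.all(spread <= tolerance_m))
```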
  • Optionally, step 102 may also record the detected postures of the other target parts, in which case step 103 may judge whether the current object is in a fallen state from both the detected positions and postures of the target parts.
  • the robot uses a target recognition model to detect the target part.
  • The target recognition model is trained in the following manner: the robot collects a sample image set comprising facial images, foot images, and the leg image associated with each foot image. The facial images cover different shooting distances and different facial angles and postures, and the foot images cover bare soles and shoe soles in different bottom-facing orientations. Each image is annotated with a category label, and optionally with a posture label as well (that is, both a category label and a posture label). This sample image set is then used to train a deep neural network, yielding the target recognition model.
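The patent does not specify an architecture, so the following PyTorch sketch only illustrates the idea of one network with a category head and a posture head trained jointly; the layer sizes, class counts, and dummy data are all assumptions.

```python
import torch
import torch.nn as nn

NUM_CATEGORIES = 4   # e.g. face, leg, bare sole, shoe sole (assumed)
NUM_POSES = 6        # e.g. upright, lying, bottom-up, ... (assumed)

class PartRecognizer(nn.Module):
    """Tiny CNN with a category head and a posture head (multi-task)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.category_head = nn.Linear(32, NUM_CATEGORIES)
        self.pose_head = nn.Linear(32, NUM_POSES)

    def forward(self, x):
        feat = self.backbone(x)
        return self.category_head(feat), self.pose_head(feat)

model = PartRecognizer()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 128, 128)
category_labels = torch.randint(0, NUM_CATEGORIES, (8,))
pose_labels = torch.randint(0, NUM_POSES, (8,))

cat_logits, pose_logits = model(images)
loss = criterion(cat_logits, category_labels) + criterion(pose_logits, pose_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```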
  • The viewpoint of a sweeping robot is generally low, roughly 6-7 cm above the ground or lower. At such a low viewpoint, a distant human body is not clear enough to be recognized accurately, while a nearby one does not fit completely within the field of view. Conventional models that detect the face alone or that are trained on the whole human body therefore cannot be applied to sweeping robots. For this reason, combined foot-and-leg sample data serve as the main training data, with facial data as auxiliary samples.
  • A face seen from the sweeping robot's viewpoint is generally very close to the camera, viewed from a low angle, tilted, or reduced to a partial ear or half-face outline. To address this, images can be collected, for example, at long range, at ultra-close range, and at lying-down or face-down angles; after collection, the photos can be processed by cropping out the facial features, enlarging them, and applying image augmentation to the enlarged crops.
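A minimal sketch of that crop-enlarge-augment pipeline, assuming the face box comes from the annotation; the output size and the augmentation choices (mirroring, small rotations) are illustrative.

```python
from PIL import Image, ImageOps

def prepare_face_samples(path: str, face_box: tuple, out_size=(128, 128)):
    """Crop, enlarge and augment one annotated face image.

    face_box: (left, top, right, bottom) pixel box from the annotation.
    Returns a list of augmented PIL images.
    """
    img = Image.open(path).convert("RGB")
    face = img.crop(face_box).resize(out_size)         # cut out, then enlarge
    samples = [face]
    samples.append(ImageOps.mirror(face))              # horizontal flip
    samples.extend(face.rotate(a) for a in (-15, 15))  # small tilts
    return samples
```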
  • Foot features seen from the sweeping robot's viewpoint must be divided into two types, with shoes and without; both types of data require collection at regular angles, unusual angles, and bottom-up views, plus data augmentation, for model training.
  • In addition, the depth information obtained by the binocular unit is combined for a further judgment: the characteristic width of a foot lies within a certain range (about 31 cm), so if the width of the object measured by the binocular unit exceeds this value, the object is judged not to be a foot.
  • Leg features seen from the sweeping robot's viewpoint resemble furniture legs and similar objects at a distance, and resemble bases and similar objects up close. A detected leg must therefore be cross-checked for associated foot features, since foot features normally accompany leg features; once a region is confirmed as a leg, its location is recorded and saved.
  • the data can be processed before training the above model.
  • The main control chips used in such robots generally offer only low computing power; some even lack an NPU and can only load and run models on the CPU. Model training must therefore balance two dimensions: the model's compute cost and its accuracy.
  • Model training first requires preprocessing of the collected data, known as data cleaning: the data are checked and reviewed in various ways, missing values are corrected, and values are normalized/standardized so that they are comparable.
  • The processed data are then annotated, for example with the category and posture labels described above.
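As a hedged example of what such cleaning and normalization might look like for image data (the exposure heuristic and the epsilon are assumptions):

```python
import numpy as np

def clean_and_normalize(batch: np.ndarray) -> np.ndarray:
    """Drop unusable frames and standardize pixel values.

    batch: (N, H, W, C) uint8 images straight from the robot's camera.
    """
    batch = batch.astype(np.float32) / 255.0
    # "Data cleaning": discard frames that are almost entirely black or
    # white (failed exposures) before they pollute the training set.
    keep = [img for img in batch if 0.02 < img.mean() < 0.98]
    if not keep:
        raise ValueError("no usable frames in this batch")
    batch = np.stack(keep)
    # Standardize so values are comparable across lighting conditions.
    return (batch - batch.mean()) / (batch.std() + 1e-6)
```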
  • Optionally, the target recognition model can be trained and run as a separate self-learning model, while the detection and recognition of other obstacles is handled by another model without self-learning capability.
  • Because the robot's viewpoint is low, the obstacles that usually affect its work are generally small, whereas a fall-recognizing robot must also detect human-body features; model selection therefore has to accommodate two different scenes. The present invention accordingly superimposes two models, and the superimposed two-model system has corresponding follow-up processes for handling recognized objects.
  • The overall flow is as follows: the robot first performs target detection on indoor objects, invoking the obstacle detection model for regular obstacles. When it detects an object of a type the model was trained on, it records and marks that obstacle's information, updates it onto the robot's map, and reacts according to the obstacle's characteristics; in the present invention the reaction is essentially to bypass the obstacle. If no obstacle of a category defined in the model is recognized, the robot continues cleaning and target detection.
  • For fall detection, the robot uses the face detection model for identification. The condition for a fall is that "face", "leg", and "foot" features are detected at the same time and the centers of gravity of the three regions lie on the same horizontal line. Once the robot detects a fall, it triggers the alarm module on the robot and sends out an alarm signal.
  • Optionally, the alarm signal or alarm information can be sent to other family members' mobile phones, for example, but not limited to, as a text message.
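Tying the pieces together, one plausible shape for this loop is sketched below; every callable (detect_parts, detect_obstacles, update_map, bypass, send_alarm) is a hypothetical stand-in for the robot's own modules, and the same-horizontal-line tolerance is assumed.

```python
import numpy as np

def on_same_horizontal_line(centroids_m: np.ndarray, tol_m: float = 0.15) -> bool:
    """Centers of gravity of face, legs and feet at roughly the same height."""
    heights = centroids_m[:, 2]            # z axis points up (assumed)
    return float(heights.max() - heights.min()) <= tol_m

def patrol_step(frame, depth, detect_parts, detect_obstacles,
                update_map, bypass, send_alarm):
    """One loop iteration; every callable is a stand-in for a robot module."""
    obstacles = detect_obstacles(frame)
    if obstacles:
        update_map(obstacles)              # record/mark, then update the map
        bypass(obstacles)                  # the described default reaction
        return
    parts = detect_parts(frame, depth)     # e.g. {"face": (x, y, z), ...}
    if {"face", "legs", "feet"} <= parts.keys():
        centroids = np.array([parts[k] for k in ("face", "legs", "feet")])
        if on_same_horizontal_line(centroids):
            send_alarm("fall detected")    # e.g. SMS to family members
```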
  • Figure 2 shows a flow chart of a method for a robot to recognize a fall according to one embodiment of the present application
  • Figure 3 shows a flow chart of a method for a robot to recognize a fall according to another embodiment of the present application.
  • The details listed in these two embodiments are provided mainly for ease of understanding and are not intended to limit the scope of protection of this application.
  • the second embodiment of the present application relates to a robot, the structure of which is shown in Figure 4.
  • the robot includes a target part detection module and a fall judgment module.
  • The target part detection module detects the first target part (a foot sole or shoe sole) while the robot moves, records the detected part's position and posture, controls the robot to move and/or change posture according to that position and posture so as to detect other target parts, and records their detected positions. The fall determination module judges from the detected positions whether the current object is in a fallen state.
  • Optionally, the target part detection module also judges from the first target part's position and posture that the current object may have fallen, estimates the positions of the suspected fallen object's other target parts, and controls the robot's movement and/or posture according to the estimated positions to detect and record those parts.
  • Optionally, the robot further includes a binocular vision unit; the robot can obtain a target part's position, for example, but not limited to, from the unit's disparity map of that part. The binocular vision unit is also used to collect depth information of the foot sole or shoe sole. If a sole is detected, the target part detection module determines its size from that depth information and checks the size against a preset condition; if it passes, the sole is taken as the first target part and its position recorded, otherwise detection of the first target part continues.
  • the other target parts in this application can be any part on the human body, and the other target parts include, for example, the face and/or the legs.
  • Optionally, the system further includes an image acquisition unit, and the target part detection module further includes a target recognition model. The image acquisition unit collects the sample image set (facial images at different shooting distances, angles, and postures; foot images of bare soles and shoe soles in different bottom-facing orientations; and the leg image associated with each foot image), with each image annotated with a category label and a posture label; the target recognition model is obtained by training a deep neural network on this set. The image acquisition unit also captures images while the robot moves and feeds them into the target recognition model to detect the target parts.
  • Optionally, the detected positions of the target parts are the positions of their respective centers of gravity, and the fall determination module judges whether the lines connecting those centers of gravity tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle; if so, the current object is judged to be in a fallen state.
  • Optionally, when the other target parts are detected, the target part detection module continuously collects multiple frames of their data and records each part's position per frame, and the fall determination module judges the fall state from the change in those positions; if the positions vary only within a preset range across frames, the current object is judged to be in a fallen state.
  • Optionally, the target part detection module controls the robot, according to the first target part's position and posture, to circle N times around the outline of the object associated with that part, performing the detection N times and recording each detected position, and the fall determination module judges the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
  • Optionally, the target part detection module also records the detected postures of the other target parts, and the fall determination module judges whether the current object is in a fallen state from both the detected positions and postures of the target parts.
  • the first embodiment is a method implementation corresponding to this embodiment.
  • the technical details in the first embodiment can be applied to this embodiment, and the technical details in this embodiment can also be applied to the first embodiment.
  • It should be noted that the robot of the present invention can include all functional modules and units of existing robots, such as, but not limited to, a battery unit, power management and motor drives, motor units, a sensor matrix unit, storage units, a navigation and positioning unit, a mobile modem unit, a Wi-Fi unit, control units, and so on.
  • each module shown in the embodiment of the robot can be understood with reference to the relevant description of the method for identifying a fall by the robot.
  • The functions of each module shown in the above robot embodiments can be implemented by programs (executable instructions) running on a processor, or by dedicated logic circuits. If the robot of the embodiments of this application is implemented as software function modules and sold or used as an independent product, it can also be stored in a computer-readable storage medium. On this understanding, the essence of the technical solutions of the embodiments, or the part that contributes over the prior art, can be embodied in the form of a software product.
  • That computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of this application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), a magnetic disk, or an optical disc. As such, embodiments of this application are not limited to any specific combination of hardware and software.
  • embodiments of the present application also provide a computer-readable storage medium in which computer-executable instructions are stored.
  • Computer-readable storage media includes permanent and non-transitory, removable and non-removable media and may be implemented by any method or technology to store information.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • Embodiments of the present application also provide a robot that includes a memory for storing computer-executable instructions and a processor; the processor is configured to implement the steps of the above methods when executing the computer-executable instructions in the memory.
  • The processor can be a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), a neural-network processing unit (NPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device.
  • The aforementioned memory can be read-only memory (ROM), random access memory (RAM), flash memory (Flash), a hard disk, a solid-state drive, and the like.
  • The steps of the methods disclosed in the embodiments of the present invention can be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
  • It should be noted that, in this application, when an act is said to be performed based on a certain element, it means the act is performed based on at least that element, covering both the case where the act is based on that element alone and the case where it is based on that element together with other elements.
  • Unless otherwise specified, expressions such as "multiple", "multiple times", and "multiple kinds" include two and more than two, twice and more than twice, and two kinds and more than two kinds, respectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A robot and a method for the robot to recognise a fall, relating to the technical field of artificial intelligence. The method for a robot to recognise a fall comprises: a robot moves position and executes detection of a first target part of a foot sole or a shoe sole, and records the position and posture of the detected first target part (101); on the basis of the position and posture of the first target part, controlling the movement position and/or changing the posture of the robot so as to execute detection of other target parts, and recording the positions of the detected other target parts (102); and, on the basis of the positions of the detected target parts, determining whether the current object is in a fall state (103). Automatic fall detection and recognition by the robot can be implemented, better monitoring the safety of family members whilst taking care of cleaning.

Description

Robot and method for a robot to recognize falls

Technical field
This application relates to the field of artificial intelligence technology, and in particular to robots and robot fall recognition technology.
Background art
With the development of artificial intelligence and rising living standards, people demand ever more from smart home appliances, and various robots have emerged. The sweeping robot is one such robot: a smart home appliance that cleans indoor floors and brings a degree of convenience to daily life. However, existing robots only perform cleaning and lack other safety-monitoring functions such as fall detection and alarm, functions that matter especially where the elderly live alone or only the elderly and children are at home; they are therefore not fully intelligent or convenient. In addition, most existing safety or fall detection functions are implemented with gravity sensors or with cameras mounted at a high vantage point, neither of which suits a sweeping robot.
Summary of the invention
The purpose of this application is to provide a robot and a method for the robot to identify falls, realizing automatic fall detection and recognition so that the robot can better monitor the safety of family members while attending to its own service functions.
This application discloses a method for a robot to identify falls, comprising the following steps:

A. The robot moves about, performs detection of a first target part (a foot sole or shoe sole), and records the position and posture of the detected first target part;

B. According to the position and posture of the first target part, the robot is controlled to move and/or change posture so as to detect other target parts, and the detected positions of those parts are recorded;

C. Whether the current object is in a fallen state is judged from the detected positions of the target parts.
In a preferred example, the detected positions of the target parts are the positions of their respective centers of gravity, and step C further comprises: judging whether the lines connecting the centers of gravity of all target parts tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle; if so, the current object is judged to be in a fallen state.
In a preferred example, step B further comprises: judging from the position and posture of the first target part that the current object may have fallen, estimating the positions of the suspected fallen object's other target parts, and controlling the robot to move and/or change posture according to the estimated positions to detect those parts and record their positions.
In a preferred example, after step B the method further comprises: when the other target parts are detected, continuously collecting multiple frames of their data and recording each part's position per frame; step C then further comprises judging the fall state from the change in those positions, with the current object judged to be in a fallen state if the positions vary only within a preset range across frames.
In a preferred example, step B further comprises: controlling the robot, according to the first target part's position and posture, to circle N times around the outline of the object associated with that part, performing the target-part detection N times and recording each detected position; step C then further comprises judging the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
In a preferred example, the first-target-part detection further comprises: if a foot sole or shoe sole is detected, collecting its depth information with the robot's binocular vision unit and determining the sole's size from that depth; then judging whether the size meets a preset condition, taking the sole as the first target part and recording its position if it does, and otherwise continuing to move and detect.
In a preferred example, the other target parts include the face and/or legs.
In a preferred example, step B further comprises recording the detected postures of the other target parts, and step C further comprises judging whether the current object is in a fallen state from both the detected positions and postures of the target parts.
In a preferred example, the robot uses a target recognition model to detect the target parts. The model is trained as follows: the robot collects a sample image set comprising facial images, foot images, and the leg image associated with each foot image; the facial images cover different shooting distances and different facial angles and postures, and the foot images cover bare soles and shoe soles in different bottom-facing orientations; each image is annotated with a category label and a posture label. This sample image set is used to train a deep neural network, yielding the target recognition model.
In a preferred example, the robot is a robot that moves close to the ground; in a further preferred example, it is a sweeping robot.
This application also discloses a robot comprising:

a target part detection module, configured to detect the first target part (a foot sole or shoe sole) while the robot moves, record the detected part's position and posture, control the robot to move and/or change posture according to that position and posture so as to detect other target parts, and record their detected positions; and

a fall determination module, configured to judge from the detected positions of the target parts whether the current object is in a fallen state.
In a preferred example, the detected positions of the target parts are the positions of their respective centers of gravity, and the fall determination module further judges whether the lines connecting those centers of gravity tend toward the same straight line or plane, with the angle between that line or plane and the ground below a threshold angle; if so, the current object is judged to be in a fallen state.
In a preferred example, the target part detection module further judges from the first target part's position and posture that the current object may have fallen, estimates the positions of the suspected fallen object's other target parts, and controls the robot's movement and/or posture according to the estimated positions to detect and record those parts.
In a preferred example, when the other target parts are detected, the target part detection module continuously collects multiple frames of their data and records each part's position per frame, and the fall determination module judges the fall state from the change in those positions; if the positions vary only within a preset range across frames, the current object is judged to be in a fallen state.
In a preferred example, the target part detection module controls the robot, according to the first target part's position and posture, to circle N times around the outline of the object associated with that part, performing the detection N times and recording each detected position, and the fall determination module judges the current object to be in a fallen state if the N detected positions are all identical or vary only within a preset range.
In a preferred example, the robot further includes a binocular vision unit for collecting depth information of the foot sole or shoe sole; if a sole is detected, the target part detection module determines its size from that depth information and checks whether the size meets a preset condition, taking the sole as the first target part and recording its position if it does, and otherwise continuing the first-target-part detection.
In a preferred example, the other target parts include the face and/or legs.
In a preferred example, the target part detection module also records the detected postures of the other target parts, and the fall determination module judges whether the current object is in a fallen state from both the detected positions and postures of the target parts.
In a preferred example, the system further includes an image acquisition unit, and the target part detection module further includes a target recognition model. The image acquisition unit collects the sample image set (facial images at different shooting distances, angles, and postures; foot images of bare soles and shoe soles in different bottom-facing orientations; and the leg image associated with each foot image), with each image annotated with a category label and a posture label; the target recognition model is obtained by training a deep neural network on this set. The image acquisition unit also captures images while the robot moves and feeds them into the target recognition model to detect the target parts.
Compared with the prior art, embodiments of this application offer at least the following advantages: for the uniquely low viewpoint of robots that move close to the ground (such as sweeping robots), this invention proposes a low-viewpoint image processing and recognition method dedicated to such robots, realizing automatic fall detection and recognition so that the robot can better monitor the safety of family members (especially the elderly and children) while attending to cleaning. Using combined foot-and-leg data plus facial data as samples, with categories and postures as labels, yields a trained model that directly recognizes the category and posture of each target part and can therefore effectively judge whether a family member has fallen. Determining a fall from the detected positions, position changes, or combined positions and postures of the face, legs, and feet gives high accuracy.
The specification of this application recites a large number of technical features distributed among the various technical solutions; listing every possible combination of features (that is, every technical solution) would make the specification excessively long. To avoid this, the technical features disclosed in the summary above, in the embodiments and examples below, and in the drawings may be freely combined with one another to form new technical solutions (all of which are deemed recited in this specification), unless such a combination is technically infeasible. For example, if one example discloses features A+B+C and another discloses A+B+D+E, where C and D are equivalent means serving the same role so that only one of them can be used at a time, while E can technically be combined with C, then the solution A+B+C+D is not deemed recited because it is technically infeasible, whereas the solution A+B+C+E is deemed recited.
Brief description of the drawings
Figure 1 is a schematic flowchart of a method for a robot to identify falls according to the first embodiment of the present application.

Figure 2 is a schematic flowchart of a method for a robot to identify falls according to one embodiment of the present application.

Figure 3 is a schematic flowchart of a method for a robot to identify falls according to another embodiment of the present application.

Figure 4 is a schematic structural diagram of a robot according to the second embodiment of the present application.
Detailed description of the embodiments
In the following description, many technical details are set out to help the reader better understand this application. However, a person of ordinary skill in the art will understand that the technical solutions claimed in this application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
To make the purpose, technical solutions, and advantages of this application clearer, its embodiments are described in further detail below with reference to the accompanying drawings.
The first embodiment of this application relates to a method for a robot to identify falls; its flow is shown in Figure 1 and comprises the following steps:

Step 101: the robot moves about, performs detection of a first target part (a foot sole or shoe sole), and records the position and posture of the detected first target part;

Step 102: according to the position and posture of the first target part, the robot is controlled to move and/or change posture so as to detect other target parts, and the detected positions of those parts are recorded;

Step 103: whether the current object is in a fallen state is judged from the detected positions of the target parts.
The specific description is as follows:
In step 101, the robot moves about, detects the first target part (a foot sole or shoe sole), and records its position and posture. The robot can be one that moves close to the ground, with its camera positioned near ground level. In one embodiment, the robot is a sweeping robot of the kind used for indoor cleaning, typically under 20 cm tall.
Optionally, the detection of the first target part in step 101 further includes the following steps:
if a sole of a foot or of a shoe is detected, the depth information of the sole is collected by the robot's binocular vision unit, and the size of the sole is determined from the collected depth information;
whether the size of the sole satisfies a preset condition is then judged; if it does, the sole is taken as the first target part and its position is recorded, otherwise the robot continues to move about and to perform the detection of the first target part.
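For illustration, this optional size check can be sketched as follows, assuming a rectified stereo pair whose binocular vision unit yields a depth map in metres together with a binary mask of the candidate sole, known intrinsics fx and fy, and assumed size bounds (the disclosure does not fix the preset condition numerically).

```python
import numpy as np

def sole_size_ok(depth, mask, fx, fy, min_len=0.15, max_len=0.35):
    ys, xs = np.nonzero(mask)                     # pixels of the candidate sole
    if ys.size == 0:
        return False                              # nothing detected at all
    z = float(np.median(depth[ys, xs]))           # robust depth of the candidate
    width = (xs.max() - xs.min() + 1) * z / fx    # pixel extent -> metres
    length = (ys.max() - ys.min() + 1) * z / fy
    # Preset condition: the candidate must be plausibly foot-sized.
    return min_len <= max(width, length) <= max_len
```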
In step 102, based on the position and pose of the first target part, the robot is controlled to move and/or change its pose so as to detect other target parts, and the positions of the detected other target parts are recorded.
The other target parts in step 102 may be any parts of the human body, for example the face and/or the legs.
Optionally, step 102 may further include the following sub-steps 102a and 102b:
Step 102a: from the position and pose of the first target part, it is judged that the current subject is suspected of having fallen, and the positions of the other target parts of the suspected fallen subject are estimated;
Step 102b: based on the estimated positions, the robot is controlled to move and/or change its pose so as to detect the other target parts, and the positions of the detected other target parts are recorded.
Optionally, the robot may determine the position of a target part, for example but not limited to, by deriving its position information from the disparity map of the target part produced by the binocular vision unit.
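As a sketch of this option, the 3D position can be triangulated from the disparity with the standard pinhole relations; f, B, cx and cy denote the focal length (pixels), stereo baseline (metres) and principal point, all assumed known from calibration.

```python
def position_from_disparity(u, v, d, f, B, cx, cy):
    Z = f * B / d            # depth from the stereo relation Z = f*B/d
    X = (u - cx) * Z / f     # lateral offset in the camera frame
    Y = (v - cy) * Z / f     # vertical offset in the camera frame
    return (X, Y, Z)
```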
In step 103, whether the current subject is in a fallen state is judged from the detected positions of the target parts.
Step 103 can be implemented in various ways. In one example, the detected position of each target part is the position of its center of gravity; in this optional example, step 103 may further include the following step: judging whether the lines connecting the centers of gravity of all the target parts tend toward one straight line or one plane whose angle with the ground is smaller than a threshold angle, and if so, judging that the current subject is in a fallen state.
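A minimal sketch of this center-of-gravity test follows. It assumes 3D centroids expressed in a frame whose z axis is the ground normal; the residual and angle thresholds are illustrative values, not values from the disclosure.

```python
import numpy as np

def is_fallen(centroids, max_residual=0.05, max_angle_deg=20.0):
    pts = np.asarray(centroids, dtype=float)      # (N, 3) centers of gravity
    mean = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - mean)          # principal directions
    direction = vt[0]                             # unit vector of the fitted line
    residual = s[1] / len(pts) if len(s) > 1 else 0.0  # deviation from a line
    # Angle between the fitted line and the ground plane (z = 0).
    angle = np.degrees(np.arcsin(min(1.0, abs(direction[2]))))
    return residual < max_residual and angle < max_angle_deg
```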
In another example, the following step may follow step 102: when the other target part is detected, multiple consecutive frames of data of that part are collected and its position in each frame is recorded. Step 103 may then be implemented as: judging from the detected changes in the position of the other target part whether the current subject is in a fallen state, for example but not limited to, judging that the current subject is in a fallen state if the position of the other target part in each frame varies only within a preset range.
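This multi-frame variant might be sketched as follows, with max_drift standing in for the preset range (an assumed value).

```python
import numpy as np

def stable_over_frames(positions, max_drift=0.10):
    pts = np.asarray(positions, dtype=float)      # one 3D position per frame
    drift = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return bool(np.all(drift <= max_drift))       # fallen if barely moving
```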
In yet another example, step 102 may be implemented as: based on the position and pose of the first target part, controlling the robot to travel N circuits around the outline of the object associated with the first target part, performing N corresponding detections of the target part and recording the position detected each time. Step 103 may then further include the following step: judging that the current subject is in a fallen state if the N detected positions of the target part are all the same or all vary only within a preset range.
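A sketch of this N-circuit check, with tol again standing in for the preset range, could read:

```python
def consistent_over_passes(positions_per_pass, tol=0.10):
    ref = positions_per_pass[0]                   # positions from the first pass
    return all(
        max(abs(a - b) for a, b in zip(p, ref)) <= tol
        for p in positions_per_pass[1:]
    )
```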
In a further example, step 102 may also include the step of recording the poses of the other target parts as they are detected, and step 103 may then be implemented as: judging from the detected positions and poses of the target parts whether the current subject is in a fallen state.
Optionally, the robot uses a target recognition model to detect the target parts.
Optionally, the target recognition model is trained as follows: the robot is used to collect a set of sample images comprising face images, foot images and a leg image associated with each foot image, where the face images cover different shooting distances and different facial viewing angles and poses, the foot images cover soles of feet and of shoes in different bottom-facing orientations, and each image is tagged with a category label; a deep neural network is trained on this sample image set to obtain the target recognition model. Optionally, each image may additionally be tagged with a pose label, i.e. carry both a category label and a pose label.
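Purely for illustration, one way to represent such doubly-labelled samples is sketched below; the label vocabulary and file paths shown are assumptions, not values from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    image_path: str
    category: str                 # "face", "foot" or "leg"
    pose: Optional[str] = None    # e.g. "sole_up" or "lying_face_down"

samples = [
    Sample("imgs/0001.png", "foot", "sole_up"),
    Sample("imgs/0002.png", "face", "lying_face_down"),
    Sample("imgs/0003.png", "leg"),               # legs carry only a category
]
```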
Further, the viewing angle of a sweeping robot is generally low, basically around 6-7 cm above the ground or lower. From such a low viewpoint, a distant human body is not clear enough to be recognized accurately, while a nearby human body does not fit entirely within the field of view; conventional models trained on faces or on the whole human body are therefore unsuitable for a sweeping robot. Optionally, the model is trained with combined foot and leg sample data as the main sample data and facial data as auxiliary sample data.
Moreover, when the sweeping robot collects the sample image set, the acquisition of facial image data takes into account that a face seen from the robot's viewpoint is generally very close to the camera, viewed from a low angle, tilted, or visible only as a partial ear or half-face outline. To address this characteristic or limitation, images are collected, for example, at long range, at ultra-close range, and in lying-flat or face-down poses. After the photographs are collected, they are processed: the face is cropped out to obtain the facial features, the facial features are enlarged, and the enlarged facial features are augmented. For the foot and leg image data, the foot features seen from the robot's viewpoint fall into two types, with shoes and without shoes; both types require collection at regular angles, at unusual angles and with the sole facing up, plus data augmentation, before model training. During recognition, the depth information from the binocular unit is used for a further check: the characteristic width of a foot normally lies within a certain range (31 cm), and if the object width measured by the binocular unit exceeds this value the object is deemed not to be a foot. Leg features seen from the robot's viewpoint resemble objects such as furniture legs when far away, and resemble base-like objects when close to the robot. A detected leg feature therefore needs a synchronous check for an associated foot feature, since foot features and leg features normally occur in association; once a feature is marked as a leg, the position of that leg is recorded and saved.
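Two of the checks described above, the foot-width limit and the leg-to-foot association, might be sketched as follows. Only the 0.31 m upper limit comes from the text; the association distance max_gap is an assumption.

```python
import math

def plausible_foot(width_m, upper=0.31):
    # 0.31 m is the width limit quoted above: wider candidates are not feet.
    return 0.0 < width_m <= upper

def confirm_leg(leg_pos, foot_positions, max_gap=0.30):
    # A leg detection is kept only when an associated foot lies nearby, since
    # furniture legs (far away) and base-like objects (close up) look similar.
    return any(math.dist(leg_pos, f) <= max_gap for f in foot_positions)
```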
Optionally, the data may be processed before the model is trained. Specifically, the main control chips used in robots generally offer only modest computing power; some do not even have an NPU and can only load and run models on the CPU, so model training must balance computing cost against accuracy. Training first requires preprocessing of the collected data, so-called data cleaning: the data are checked and reviewed to correct missing values, and values are normalized/standardized to make them comparable. The processed data are then annotated, for example with the predefined object categories to be recognized. The data set may, for example, be split into three parts: a larger subset used as the training set, accounting for 95% of the original data; a second, usually smaller, subset used for model validation; and a third, also usually smaller, subset used for model testing.
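The described three-way split could be sketched as below; the text fixes only the 95% training share, so the even division of the remainder between validation and test is an assumption.

```python
import random

def split_dataset(items, train_frac=0.95, seed=0):
    items = items[:]                              # do not mutate the caller's list
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = (len(items) - n_train) // 2           # assumed: remainder split evenly
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```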
Optionally, the target recognition model may be trained and run as a separate self-learning model, with the detection and recognition of other obstacles handled by another model without self-learning capability. Because the robot's viewpoint is low, the obstacles or objects that usually affect its work are relatively small, whereas a fall-recognizing robot must recognize parts of the human body, so two different scenarios must be accommodated when selecting models. The present invention therefore superimposes two models. With the two superimposed models, recognized objects are handled by corresponding follow-up flows, the overall flow being as follows: first, the robot performs target detection of indoor objects, invoking the obstacle detection model for ordinary obstacles; when the robot detects an object of a class trained into that model, it records and marks the obstacle's information, updates that information into its map, and acts according to the obstacle's characteristics, in the present invention essentially by going around it. If no obstacle of a class defined in the model is recognized, the robot continues cleaning and target detection. For fall detection, the robot uses the face detection model; the condition for a fall is that "face", "leg" and "foot" features are detected simultaneously with the centers of gravity of the three regions lying essentially on one horizontal line. When a fall signal is detected, the alarm module on the robot is triggered and an alarm signal is issued.
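A sketch of this two-model flow and the fall condition follows; the model interfaces, the detection fields, and the 0.15 m tolerance standing in for "the same horizontal line" are all assumptions.

```python
def process_frame(frame, obstacle_model, face_model, robot):
    # Generic obstacles: record, map, and plan around them.
    for obstacle in obstacle_model.detect(frame):
        robot.mark_on_map(obstacle)
    # Fall detection: face, leg and foot must appear together, with their
    # centers of gravity near one horizontal line.
    parts = {d.label: d for d in face_model.detect(frame)}
    if {"face", "leg", "foot"} <= parts.keys():
        heights = [parts[k].centroid[2] for k in ("face", "leg", "foot")]
        if max(heights) - min(heights) < 0.15:    # assumed tolerance in metres
            robot.alarm()                          # trigger the alarm module
```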
In the present invention, after a family member is recognized as having fallen, an alarm signal or alarm message may, for example, be sent to the mobile phones of other family members, for example but not limited to by SMS.
Figure 2 shows a flowchart of a robot fall-recognition method according to one example of this application, and Figure 3 shows a flowchart according to another example. The details listed in these two examples are given mainly for ease of understanding and do not limit the scope of protection of this application.
A second embodiment of this application relates to a robot whose structure is shown in Figure 4. The robot includes a target part detection module and a fall judgment module.
The target part detection module is configured to detect the first target part, a sole of a foot or of a shoe, while the robot moves about, to record the position and pose of the detected first target part, to control the robot, based on that position and pose, to move and/or change its pose so as to detect other target parts, and to record the positions of the detected other target parts. The fall judgment module is configured to judge from the detected positions of the target parts whether the current subject is in a fallen state.
Optionally, the target part detection module is further configured to judge from the position and pose of the first target part that the current subject is suspected of having fallen, to estimate the positions of the other target parts of the suspected fallen subject, and to control the robot, based on the estimated positions, to move and/or change its pose so as to detect the other target parts and record their detected positions.
Optionally, the robot further includes a binocular vision unit; the robot may, for example but not limited to, derive the position information of a target part from the disparity map of that part produced by the binocular vision unit.
Optionally, the binocular vision unit is further configured to collect the depth information of a sole of a foot or of a shoe; the target part detection module is further configured, when such a sole is detected, to determine its size from the depth information collected by the binocular vision unit, and to judge whether that size satisfies a preset condition; if it does, the sole is taken as the first target part and its position is recorded, otherwise the detection of the first target part continues.
In this application the other target parts may be any parts of the human body, for example the face and/or the legs.
Optionally, the system further includes an image acquisition unit, and the target part detection module further includes a target recognition model. On the one hand, the image acquisition unit is used to collect a set of sample images comprising face images, foot images and a leg image associated with each foot image, where the face images cover different shooting distances and different facial viewing angles and poses, the foot images cover soles of feet and of shoes in different bottom-facing orientations, and each image is tagged with a category label and a pose label; the target recognition model is obtained by training a deep neural network on this sample image set. On the other hand, the image acquisition unit also collects images while the robot moves about, and the collected images are fed into the target recognition model to detect the target parts.
In one example, the detected position of each target part is the position of its center of gravity; the fall judgment module is further configured to judge whether the lines connecting the centers of gravity of all the target parts tend toward one straight line or one plane whose angle with the ground is smaller than a threshold angle, and if so, to judge that the current subject is in a fallen state.
In another example, the target part detection module is further configured, when the other target part is detected, to collect multiple consecutive frames of data of that part and record its position in each frame; the fall judgment module is further configured to judge from the detected changes in position whether the current subject is in a fallen state, judging a fall when the per-frame position of the other target part varies only within a preset range.
In yet another example, the target part detection module is further configured, based on the position and pose of the first target part, to control the robot to travel N circuits around the outline of the object associated with the first target part, performing N corresponding detections of the target part and recording the position detected each time; the fall judgment module is further configured to judge the current subject as fallen if the N detected positions of the target part are all the same or all vary only within a preset range.
In a further example, the target part detection module is further configured to record the poses of the other target parts as they are detected, and the fall judgment module is further configured to judge from the detected positions and poses of the target parts whether the current subject is in a fallen state.
The first embodiment is the method embodiment corresponding to this embodiment; the technical details of the first embodiment apply to this embodiment, and the technical details of this embodiment apply to the first embodiment.
It should be noted that the robot of the present invention may include all the functional modules and units of existing robots, for example but not limited to a battery unit, power management and motor drivers, a motor unit, a sensor matrix unit, a storage unit, a navigation and positioning unit, a mobile modem unit, a Wi-Fi unit and a control unit.
It should be noted that a person skilled in the art will understand that the functions of the modules shown in the above robot embodiment can be understood with reference to the foregoing description of the method for a robot to recognize a fall. The functions of those modules may be implemented by a program (executable instructions) running on a processor, or by specific logic circuits. If the robot of the embodiments of this application is implemented as software function modules and sold or used as an independent product, it may also be stored on a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing over the prior art, may be embodied as a software product stored on a storage medium and including instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of this application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk or an optical disc. The embodiments of this application are thus not limited to any specific combination of hardware and software.
Accordingly, the embodiments of this application also provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the method embodiments of this application. Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium usable to store information accessible by a computing device. As defined herein, computer-readable storage media do not include transitory media such as modulated data signals and carrier waves.
In addition, the embodiments of this application also provide a robot including a memory for storing computer-executable instructions, and a processor configured, when executing the computer-executable instructions in the memory, to implement the steps of the above method embodiments. The processor may be a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), a neural network processing unit (NPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device. The memory may be a read-only memory (ROM), a random-access memory (RAM), flash memory, a hard disk, a solid-state drive, or the like. The steps of the methods disclosed in the embodiments of the present invention may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
It should be noted that, in the filing documents of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article or device that comprises it. In the filing documents of this patent, a statement that an action is performed according to an element means that the action is performed at least according to that element, covering both the case where the action is performed according to that element alone and the case where it is performed according to that element together with other elements. Expressions such as "multiple", "multiple times" and "various" include two or more instances, occurrences or kinds.
All documents mentioned in this application are deemed to be incorporated in their entirety into the disclosure of this application, so that they may serve as a basis for amendment where necessary. It should further be understood that the foregoing are only preferred embodiments of this description and are not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of one or more embodiments of this description shall fall within the scope of protection of those embodiments.

Claims (13)

1. A method for a robot to recognize a fall, characterized by comprising the following steps:
    A. the robot moves about, performs detection of a first target part, namely a sole of a foot or of a shoe, and records the position and pose of the detected first target part;
    B. based on the position and pose of said first target part, controlling said robot to move and/or change its pose so as to detect other target parts, and recording the positions of the detected other target parts;
    C. judging from the detected positions of said target parts whether the current subject is in a fallen state.
2. The method of claim 1, characterized in that the detected positions of the target parts are the positions of their respective centers of gravity;
    said step C further comprising the following step:
    judging whether the lines connecting the centers of gravity of all said target parts tend toward one straight line or one plane whose angle with the ground is smaller than a threshold angle, and if so, judging that the current subject is in a fallen state.
3. The method of claim 1, characterized in that said step B further comprises the following steps:
    judging from the position and pose of said first target part that the current subject is suspected of having fallen, and estimating the positions of other target parts of the suspected fallen subject;
    controlling said robot, based on the estimated positions, to move and/or change its pose so as to detect the other target parts, and recording the positions of the detected other target parts.
4. The method of claim 3, characterized in that the following step follows said step B:
    when said other target part is detected, continuously collecting multiple frames of data of said other target part and recording the position of said other target part in each frame;
    said step C further comprising the following step:
    judging from the detected changes in the positions of said other target parts whether the current subject is in a fallen state, and judging that the current subject is in a fallen state if the position of said other target part in each frame varies within a preset range.
5. The method of claim 1, characterized in that said step B further comprises the following step:
    controlling said robot, based on the position and pose of said first target part, to travel N circuits around the outline of the object associated with said first target part, correspondingly performing N detections of said target part and recording the position of the target part detected each time;
    said step C further comprising the following step:
    judging that the current subject is in a fallen state if the N detected positions of the target part are all the same or all vary within a preset range.
6. The method of claim 1, characterized in that the detection of the first target part, a sole of a foot or of a shoe, further comprises the following steps:
    if a sole of a foot or of a shoe is detected, collecting depth information of said sole with the binocular vision unit of said robot, and determining the size of said sole from the collected depth information;
    judging whether the size of said sole satisfies a preset condition; if it does, taking said sole as the first target part and recording the position of said first target part, otherwise continuing to move about and to perform the detection of the first target part.
7. The method of claim 1, characterized in that said other target parts comprise the face and/or the legs.
8. The method of claim 1, characterized in that said step B further comprises the step of recording the poses of other target parts as they are detected;
    said step C further comprising the following step: judging from the detected positions and poses of said target parts whether the current subject is in a fallen state.
9. The method of any one of claims 1 to 8, characterized in that said robot detects said target parts with a target recognition model;
    said target recognition model being trained by the following steps:
    collecting a set of sample images with said robot, the sample images comprising face images, foot images and a leg image associated with each foot image, the face images containing images at different shooting distances and different facial viewing angles and poses, the foot images containing images of soles of feet and of shoes in different bottom-facing orientations, each image being tagged with a category label and a pose label;
    training a deep neural network with said sample image set to obtain said target recognition model.
10. The method of any one of claims 1 to 8, characterized in that said robot is a robot that moves close to the ground.
11. The method of claim 10, characterized in that said robot is a sweeping robot.
12. A robot, characterized by comprising:
    a memory for storing computer-executable instructions; and
    a processor, coupled to said memory, for implementing the steps of the method of any one of claims 1 to 11 when executing said computer-executable instructions.
13. A computer-readable storage medium, characterized in that computer-executable instructions are stored therein, the computer-executable instructions, when executed by a processor, implementing the steps of the method of any one of claims 1 to 11.
PCT/CN2023/093320 2022-05-10 2023-05-10 Robot and method for robot to recognise fall WO2023217193A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210508632.3A CN117064255A (en) 2022-05-10 2022-05-10 Sweeping robot and method for recognizing falling of sweeping robot
CN202210508632.3 2022-05-10

Publications (1)

Publication Number Publication Date
WO2023217193A1 true WO2023217193A1 (en) 2023-11-16

Family

ID=88712183

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/093320 WO2023217193A1 (en) 2022-05-10 2023-05-10 Robot and method for robot to recognise fall

Country Status (2)

Country Link
CN (1) CN117064255A (en)
WO (1) WO2023217193A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
CN110458061A (en) * 2019-07-30 2019-11-15 四川工商学院 A kind of method and company robot of identification Falls in Old People
US20190365170A1 (en) * 2016-12-21 2019-12-05 Service-Konzepte MM AG Autonomous domestic appliance and seating or reclining furniture as well as domestic appliance
CN111062283A (en) * 2019-12-06 2020-04-24 湖南集思汇智电子有限公司 Nursing method of nursing robot
WO2021006547A2 (en) * 2019-07-05 2021-01-14 Lg Electronics Inc. Moving robot and method of controlling the same
CN112287759A (en) * 2020-09-26 2021-01-29 浙江汉德瑞智能科技有限公司 Tumble detection method based on key points
CN113679302A (en) * 2021-09-16 2021-11-23 安徽淘云科技股份有限公司 Monitoring method, device, equipment and storage medium based on sweeping robot
CN215682387U (en) * 2021-06-17 2022-01-28 深圳市商汤科技有限公司 Movable image pickup device and electronic apparatus
CN114419842A (en) * 2021-12-31 2022-04-29 浙江大学台州研究院 Artificial intelligence-based falling alarm method and device for assisting user in moving to intelligent closestool

Also Published As

Publication number Publication date
CN117064255A (en) 2023-11-17

Similar Documents

Publication Publication Date Title
US11400600B2 (en) Mobile robot and method of controlling the same
US10717193B2 (en) Artificial intelligence moving robot and control method thereof
Mastorakis et al. Fall detection system using Kinect’s infrared sensor
US10921806B2 (en) Moving robot
KR101629649B1 (en) A robot cleaner and control method thereof
CN107194967B (en) Human body tumbling detection method and device based on Kinect depth image
CN103576660B (en) Smart Home method for supervising
US11547261B2 (en) Moving robot and control method thereof
US20190133396A1 (en) Mobile robot and mobile robot control method
KR102548936B1 (en) Artificial intelligence Moving robot and control method thereof
WO2021232933A1 (en) Safety protection method and apparatus for robot, and robot
US20140139633A1 (en) Method and System for Counting People Using Depth Sensor
CN206277403U (en) A kind of multi-functional service for infrastructure robot
Volkhardt et al. Fallen person detection for mobile robots using 3D depth data
TW201246089A (en) Method for setting dynamic environmental image borders and method for instantly determining the content of staff member activities
US9990857B2 (en) Method and system for visual pedometry
WO2023115658A1 (en) Intelligent obstacle avoidance method and apparatus
JP2019212148A (en) Information processing device and information processing program
CN115471916A (en) Smoking detection method, device, equipment and storage medium
WO2023217193A1 (en) Robot and method for robot to recognise fall
CN208514497U (en) One kind can avoidance make an inventory robot
Volkhardt et al. Multi-modal people tracking on a mobile companion robot
JP2016157170A (en) Abnormal condition notification system, abnormal condition notification program, abnormal condition notification method, and abnormal condition notification equipment
WO2019057954A1 (en) Improved localization of a mobile device based on image and radio words
CN208225261U (en) A kind of belt type human body accidentally tumble detection positioning device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23802962

Country of ref document: EP

Kind code of ref document: A1