CN110833357A - Obstacle identification method and device - Google Patents

Obstacle identification method and device

Info

Publication number
CN110833357A
CN110833357A (application CN201810928683.5A)
Authority
CN
China
Prior art keywords
obstacle
characteristic information
data
action
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810928683.5A
Other languages
Chinese (zh)
Inventor
文旷瑜
吴少波
易斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Gree Wuhan Electric Appliances Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Gree Wuhan Electric Appliances Co Ltd
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Gree Wuhan Electric Appliances Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201810928683.5A priority Critical patent/CN110833357A/en
Publication of CN110833357A publication Critical patent/CN110833357A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection

Abstract

The invention discloses an obstacle identification method and device. The method comprises: acquiring first characteristic information of an obstacle; and inputting the first characteristic information into a classification model, the classification model outputting the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information. The invention solves the technical problem of low obstacle identification accuracy caused by the inability of the related art to accurately identify obstacles.

Description

Obstacle identification method and device
Technical Field
The invention relates to the field of obstacle identification, in particular to an obstacle identification method and device.
Background
It is very common for a robot to encounter obstacles during operation. A robot will generally detour around any obstacle it detects, but when obstacles differ in type, detouring is not always the right choice. For example, when a sweeping robot encounters a piece of waste paper during cleaning, it detects the paper as an obstacle and detours around it, when in fact the paper need only be swept up or pushed aside. Similarly, when the sweeping robot encounters a balloon on the floor, it detects an obstacle and detours, when it could simply continue along its original route and knock the balloon out of the way. In addition, the sweeping robot cannot predict the motion trend of surrounding objects: during sweeping, a nearby object may topple onto the robot and damage it, so that the robot requires maintenance or is even destroyed, shortening its service life and raising maintenance costs. In summary, obstacle recognition in the related art cannot accurately identify the type of an obstacle, and the robot's obstacle recognition accuracy is therefore low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an obstacle identification method and device, which at least solve the technical problem of low obstacle identification accuracy caused by the inability of the related art to accurately identify obstacles.
According to one aspect of the embodiments of the invention, an obstacle identification method is provided, comprising: acquiring first characteristic information of an obstacle; and inputting the first characteristic information into a classification model, the classification model outputting the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information.
Optionally, after the first characteristic information is input into the classification model and the classification model outputs the type of the obstacle corresponding to the first characteristic information, the method further comprises: sending an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
Optionally, avoiding the dynamic obstacle comprises: acquiring second characteristic information of the dynamic obstacle; inputting the second characteristic information into a prediction model, the prediction model predicting the motion influence range of the dynamic obstacle, wherein the prediction model is obtained through machine learning training using multiple groups of data, and each group of data comprises: second characteristic information and the motion influence range of the dynamic obstacle corresponding to the second characteristic information; and performing avoidance according to the motion influence range.
Optionally, performing avoidance according to the motion influence range comprises: judging whether the machine's own motion range intersects the motion influence range; and in the case that the machine's own motion range intersects the motion influence range, sending an action instruction to move out of the motion influence range, the action instruction being used to direct the machine's own action.
Optionally, acquiring the first characteristic information of the obstacle comprises: acquiring image information of obstacles in the surrounding environment; and extracting the first characteristic information of the obstacle from the image information.
According to another aspect of the embodiments of the invention, an obstacle recognition device is provided, comprising: an acquisition module for acquiring first characteristic information of an obstacle; and a classification module for inputting the first characteristic information into a classification model, the classification model outputting the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information.
Optionally, the device further comprises: a sending module for sending an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
Optionally, the device further includes an action module, configured to perform an action of avoiding an obstacle according to the action signal, where the action module includes: the acquiring unit is used for acquiring second characteristic information of the dynamic obstacle; a prediction unit, configured to input the second feature information into a prediction model, and predict a motion influence range of the dynamic obstacle by using the prediction model, where the prediction model is obtained by machine learning training using multiple sets of data, and each set of data in the multiple sets of data includes: second characteristic information and a motion influence range of the dynamic obstacle corresponding to the second characteristic information; and the avoiding unit is used for avoiding according to the motion influence range.
Optionally, the avoiding unit comprises: a judging subunit for judging whether the machine's own motion range intersects the motion influence range; and an action subunit for sending, in the case that the machine's own motion range intersects the motion influence range, an action instruction to move out of the motion influence range, the action instruction being used to direct the machine's own action.
Optionally, the obtaining module includes: an acquisition unit configured to acquire image information of an obstacle in a surrounding environment; an extraction unit configured to extract first feature information of an obstacle from the image information.
According to another aspect of the embodiments of the present invention, there is provided a processor for executing a program, wherein the program executes to perform the method of any one of the above.
According to another aspect of the embodiment of the invention, a sweeping robot is provided, which comprises the obstacle recognition device in any one of the above.
In the embodiments of the invention, first characteristic information of an obstacle is acquired; the first characteristic information is input into a classification model, and the classification model outputs the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information. This achieves the purpose of accurately identifying and classifying detected obstacles, thereby achieving the technical effect of classifying detected obstacles and solving the technical problem of low obstacle identification accuracy caused by the inability of the related art to accurately identify obstacles.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flow chart of a method of obstacle identification according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an obstacle recognition device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an obstacle identification method, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, acquiring first characteristic information of an obstacle;
step S104, inputting the first characteristic information into a classification model, and outputting, by the classification model, the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: the first characteristic information and the type of the obstacle corresponding to the first characteristic information.
Through the above steps, first characteristic information of the obstacle is acquired and input into a classification model, and the classification model outputs the type of the obstacle corresponding to the first characteristic information. This achieves the purpose of accurately identifying and classifying detected obstacles, thereby achieving the technical effect of classifying detected obstacles and solving the technical problem of low obstacle identification accuracy caused by the inability of the related art to accurately identify obstacles.
The first characteristic information may be image information, audio information or video information. The first characteristic information is used to classify the obstacle, and the classification result is one of a movable obstacle, an immovable obstacle and a dynamic obstacle. Movable and immovable obstacles are both static obstacles; the criterion dividing them differs from robot to robot. For example, the division can be based on the robot's weight and the maximum weight the robot can bear: an obstacle the robot is able to push away is movable, and an obstacle the robot cannot push away is immovable.
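The weight-based division described above can be sketched as a simple rule. The following Python fragment is purely illustrative; the threshold value, units and function names are assumptions and do not appear in the disclosure:

```python
# Hypothetical sketch: classify a static obstacle as movable or immovable
# by comparing its estimated weight against the maximum weight the robot
# can push. The 2.0 kg threshold is an invented example value.

def classify_static_obstacle(obstacle_weight_kg, robot_max_push_kg=2.0):
    """Return 'movable' if the robot can push the obstacle, else 'immovable'."""
    if obstacle_weight_kg <= robot_max_push_kg:
        return "movable"
    return "immovable"
```

Under this rule, a paper ball (around 0.01 kg) would be movable, while a sofa (around 40 kg) would be immovable.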
A dynamic obstacle is an obstacle in a moving state, and in many cases it moves relatively fast, for example a cup falling from a table. A static obstacle that may become a dynamic obstacle can therefore be predicted, and such prediction and prevention should be performed in advance. For example, when a cup is placed near the edge of a table, it should be noted that the cup is likely to fall off the table top, and the area into which it would fall should be avoided as much as possible.
The classification model is obtained through machine learning training using multiple groups of data; it may be, for example, a convolutional neural network or another recognition model capable of machine learning. Training proceeds on the multiple groups of data until the model converges, at which point the model has learned the mapping between input data and output data. Each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information.
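As a toy illustration of a model learned from groups of (first characteristic information, obstacle type) data, the sketch below substitutes a nearest-neighbour classifier for the convolutional neural network mentioned above; the feature vectors, labels and function names are invented for illustration and are not part of the disclosure:

```python
import math

# Toy stand-in for the classification model: each training pair maps a
# feature vector (here: [estimated weight in kg, observed speed in m/s])
# to an obstacle type, mirroring the (input, output) groups of data
# described in the text. Values are invented examples.
TRAINING_DATA = [
    ([0.01, 0.0], "movable"),    # e.g. waste paper at rest
    ([40.0, 0.0], "immovable"),  # e.g. a sofa
    ([0.3, 1.5], "dynamic"),     # e.g. a cup falling off a table
]

def classify_obstacle(features):
    """Return the obstacle type of the nearest training example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(TRAINING_DATA, key=lambda pair: dist(pair[0], features))
    return best[1]
```

A real implementation would train a neural network until convergence rather than memorising examples, but the input/output relationship is the same.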
Optionally, after the first characteristic information is input into the classification model and the classification model outputs the type of the obstacle corresponding to the first characteristic information, the method further comprises: sending an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
The actions performed naturally differ for different kinds of obstacles. Movable obstacles, such as paper balls, balloons and other light objects, can be moved to another position so that they do not affect the robot's operation. An immovable obstacle can only be detoured around or passed over. For a dynamic obstacle, its trajectory must be predicted and it must be judged whether the obstacle will affect the robot's operation; if so, the robot needs to act in advance, either by changing the obstacle's trajectory, for example batting away a flying table-tennis ball, or by changing its own running state, for example moving out of the way of a toppling desk or chair.
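The three-way choice of action signal described above might be dispatched as follows; the signal names are hypothetical and chosen for illustration only:

```python
# Hypothetical dispatch of the action signal by obstacle type, following
# the three cases in the text: move a movable obstacle, bypass an
# immovable one, avoid a dynamic one.

def action_signal(obstacle_type):
    """Map an obstacle type to an (invented) action signal name."""
    signals = {
        "movable": "MOVE_OBSTACLE",  # e.g. push a paper ball aside
        "immovable": "BYPASS",       # e.g. detour around a wall
        "dynamic": "AVOID",          # e.g. leave a falling object's path
    }
    if obstacle_type not in signals:
        raise ValueError(f"unknown obstacle type: {obstacle_type}")
    return signals[obstacle_type]
```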
Optionally, avoiding the dynamic obstacle comprises: acquiring second characteristic information of the dynamic obstacle; inputting the second characteristic information into a prediction model, the prediction model predicting the motion influence range of the dynamic obstacle, wherein the prediction model is obtained through machine learning training using multiple groups of data, and each group of data comprises: second characteristic information and the motion influence range of the dynamic obstacle corresponding to the second characteristic information; and performing avoidance according to the motion influence range.
When avoiding a dynamic obstacle, its moving trajectory needs to be predicted. During prediction, second characteristic information of the dynamic obstacle is acquired first; the second characteristic information is used to predict the motion influence range of the dynamic obstacle, and the dynamic obstacle is then avoided according to that range.
The prediction model is obtained through machine learning training using multiple groups of data; it may be, for example, a convolutional neural network or another recognition model capable of machine learning. Training proceeds on the multiple groups of data until the model converges, at which point the model has learned the mapping between input data and output data. Each group of data comprises: second characteristic information and the motion influence range of the dynamic obstacle corresponding to the second characteristic information.
Optionally, performing avoidance according to the motion influence range comprises: judging whether the machine's own motion range intersects the motion influence range; and in the case that the machine's own motion range intersects the motion influence range, sending an action instruction to move out of the motion influence range, the action instruction being used to direct the machine's own action.
Specifically, when avoiding according to the motion influence range of the dynamic obstacle, it is judged whether the machine's own motion range intersects the motion influence range. If they intersect, a collision is possible, and an action instruction to move outside the motion influence range is sent. The machine here may be any of various robots or other movable machines.
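The intersection test between the machine's own motion range and the motion influence range can be sketched by modelling both ranges as axis-aligned rectangles. The rectangle representation and the instruction string are assumptions for illustration; the disclosure does not specify a geometric representation:

```python
# Sketch of the intersection judgment described above. Each range is an
# axis-aligned rectangle (x_min, y_min, x_max, y_max) on the floor plane.

def ranges_intersect(a, b):
    """True if the two rectangles overlap with positive area."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def avoidance_command(robot_range, influence_range):
    """Return a move-out instruction only when the ranges intersect."""
    if ranges_intersect(robot_range, influence_range):
        return "MOVE_OUT_OF_INFLUENCE_RANGE"
    return None  # no action needed; ranges are disjoint
```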
Optionally, acquiring the first characteristic information of the obstacle comprises: acquiring image information of obstacles in the surrounding environment; and extracting the first characteristic information of the obstacle from the image information. The first characteristic information of the obstacle can be acquired in various forms: from image information, from audio information, or from text information.
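Extracting first characteristic information from image information might, under very simplified assumptions, look like the following; the chosen features (thresholded pixel area and mean brightness) are illustrative only and are not enumerated in the disclosure:

```python
# Minimal sketch of feature extraction from image data. The image is a
# 2D list of grayscale pixel values (0-255); pixels brighter than the
# threshold are treated as belonging to the obstacle.

def extract_features(image, threshold=128):
    """Return (obstacle_pixel_area, mean_brightness) for pixels above threshold."""
    obstacle_pixels = [p for row in image for p in row if p > threshold]
    if not obstacle_pixels:
        return (0, 0.0)  # no obstacle pixels detected
    return (len(obstacle_pixels), sum(obstacle_pixels) / len(obstacle_pixels))
```

In practice the features fed to the classification model would come from a learned feature extractor rather than a fixed threshold.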
Fig. 2 is a schematic structural diagram of an obstacle recognition device according to an embodiment of the present invention, and as shown in fig. 2, the obstacle recognition device 20 includes: an acquisition module 22 and a classification module 24, which are described in detail below with respect to the obstacle recognition device 20.
An acquisition module 22, configured to acquire first characteristic information of an obstacle; and a classification module 24, connected to the acquisition module 22 and configured to input the first characteristic information into a classification model, the classification model outputting the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information.
Optionally, the obstacle recognition device 20 further comprises: a sending module for sending an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
Optionally, the obstacle recognition device 20 further includes an action module, configured to perform an obstacle avoiding action according to the action signal, where the action module includes: the acquiring unit is used for acquiring second characteristic information of the dynamic obstacle; and the prediction unit is used for inputting the second characteristic information into a prediction model, and predicting the motion influence range of the dynamic obstacle by the prediction model, wherein the prediction model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the second characteristic information and the motion influence range of the dynamic obstacle corresponding to the second characteristic information; and the avoiding unit is used for avoiding according to the motion influence range.
Optionally, the avoiding unit comprises: a judging subunit for judging whether the machine's own motion range intersects the motion influence range; and an action subunit for sending, in the case that the machine's own motion range intersects the motion influence range, an action instruction to move out of the motion influence range, the action instruction being used to direct the machine's own action.
Optionally, the obtaining module further includes: an acquisition unit configured to acquire image information of an obstacle in a surrounding environment; an extraction unit configured to extract first feature information of an obstacle from the image information.
According to another aspect of the embodiments of the present invention, there is provided a processor for executing a program, wherein the program executes to perform the method of any one of the above.
According to another aspect of the embodiments of the invention, a sweeping robot is provided, comprising the obstacle recognition device of any one of the above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. An obstacle recognition method, comprising:
acquiring first characteristic information of an obstacle;
inputting the first characteristic information into a classification model, and outputting, by the classification model, the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data comprises: the first characteristic information and the type of the obstacle corresponding to the first characteristic information.
2. The method according to claim 1, wherein after inputting the first characteristic information into the classification model and outputting, by the classification model, the type of the obstacle corresponding to the first characteristic information, the method further comprises:
sending an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
3. The method of claim 2, wherein avoiding the dynamic obstacle comprises:
acquiring second characteristic information of the dynamic obstacle;
inputting the second characteristic information into a prediction model, and predicting a motion influence range of the dynamic obstacle using the prediction model, wherein the prediction model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data comprises: second characteristic information and a motion influence range of the dynamic obstacle corresponding to the second characteristic information;
and avoiding the dynamic obstacle according to the motion influence range.
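As an illustrative stand-in for the learned prediction model of claim 3, the motion influence range can be approximated as a circle swept along the obstacle's velocity under a constant-velocity assumption. The horizon and radius parameters are invented for the sketch:

```python
def predict_influence_range(position, velocity, horizon=2.0, base_radius=0.3):
    """Predict a circular motion influence range for a dynamic obstacle:
    centre swept along the velocity over `horizon` seconds, radius grown
    by the distance the obstacle can travel in that time. A linear-motion
    sketch, not the trained model described in the claims."""
    px, py = position
    vx, vy = velocity
    speed = (vx * vx + vy * vy) ** 0.5
    centre = (px + vx * horizon / 2.0, py + vy * horizon / 2.0)
    radius = base_radius + speed * horizon / 2.0
    return centre, radius

print(predict_influence_range((0.0, 0.0), (1.0, 0.0)))
```

The learned model in the claims would map richer second characteristic information (size, gait, past trajectory) to the influence range; the circle here just fixes a concrete output shape for the avoidance step.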
4. The method of claim 3, wherein avoiding the dynamic obstacle according to the motion influence range comprises:
determining whether the machine's own motion range intersects the motion influence range;
and in a case that the machine's own motion range intersects the motion influence range, sending an action instruction for moving out of the motion influence range, wherein the action instruction is used for indicating the machine's own action.
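Treating both the machine's own motion range and the motion influence range as circles, claim 4's intersection test and move-out instruction can be sketched as follows; the instruction tuples and the radial escape heuristic are assumptions for illustration only:

```python
def ranges_intersect(c1, r1, c2, r2):
    """Two circular ranges intersect iff the distance between their
    centres is at most the sum of their radii."""
    dx, dy = c1[0] - c2[0], c1[1] - c2[1]
    return dx * dx + dy * dy <= (r1 + r2) ** 2

def action_instruction(robot_centre, robot_radius, influence_centre, influence_radius):
    """If the machine's own motion range intersects the motion influence
    range, return a (hypothetical) instruction moving the machine radially
    out to the boundary of the combined range; otherwise continue."""
    if not ranges_intersect(robot_centre, robot_radius, influence_centre, influence_radius):
        return ("continue", robot_centre)
    dx = robot_centre[0] - influence_centre[0]
    dy = robot_centre[1] - influence_centre[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0  # arbitrary direction if centres coincide
    scale = (robot_radius + influence_radius) / dist
    target = (influence_centre[0] + dx * scale, influence_centre[1] + dy * scale)
    return ("move_to", target)
```

For example, a robot whose motion range is a circle of radius 0.5 at (1.0, 0.0) overlaps an influence circle of radius 1.0 at the origin, so it is instructed to move out along the line joining the centres.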
5. The method of claim 1, wherein acquiring the first characteristic information of the obstacle comprises:
acquiring image information of an obstacle in the surrounding environment;
and extracting the first characteristic information of the obstacle from the image information.
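The extraction step of claim 5 can be sketched on a binary image given as rows of 0/1 pixels; the particular features chosen (bounding-box width, height, fill ratio) are illustrative assumptions, whereas a real system would use a learned feature extractor:

```python
def extract_first_features(image):
    """Extract simple first characteristic information of an obstacle from
    a binary image (list of rows of 0/1 pixels): bounding-box width and
    height in pixels, plus the fill ratio of the box. Sketch only."""
    pixels = [(x, y) for y, row in enumerate(image)
                     for x, v in enumerate(row) if v]
    if not pixels:
        return None  # no obstacle present in the image
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    fill_ratio = len(pixels) / (width * height)
    return {"width": width, "height": height, "fill_ratio": fill_ratio}
```

The returned dictionary plays the role of the first characteristic information that claim 1 feeds into the classification model.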
6. An obstacle recognition device, comprising:
an acquisition module, configured to acquire first characteristic information of an obstacle;
a classification module, configured to input the first characteristic information into a classification model and output, by the classification model, the type of the obstacle corresponding to the first characteristic information, wherein the type of the obstacle comprises a movable obstacle, an immovable obstacle and a dynamic obstacle, the classification model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data comprises: first characteristic information and the type of the obstacle corresponding to the first characteristic information.
7. The apparatus of claim 6, further comprising:
a sending module, configured to send an action signal according to the type of the obstacle, wherein the action signal comprises at least one of the following: moving the movable obstacle away, bypassing the immovable obstacle, and avoiding the dynamic obstacle.
8. The apparatus of claim 7, further comprising an action module configured to perform an obstacle avoidance action based on the action signal, the action module comprising:
an acquiring unit, configured to acquire second characteristic information of the dynamic obstacle;
a prediction unit, configured to input the second characteristic information into a prediction model and predict a motion influence range of the dynamic obstacle using the prediction model, wherein the prediction model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data comprises: second characteristic information and a motion influence range of the dynamic obstacle corresponding to the second characteristic information;
and an avoiding unit, configured to avoid the dynamic obstacle according to the motion influence range.
9. The apparatus of claim 8, wherein the avoiding unit comprises:
a judging subunit, configured to determine whether the machine's own motion range intersects the motion influence range;
and an action subunit, configured to, in a case that the machine's own motion range intersects the motion influence range, send an action instruction for moving out of the motion influence range, wherein the action instruction is used for indicating the machine's own action.
10. The apparatus of claim 6, wherein the acquisition module comprises:
an acquisition unit, configured to acquire image information of an obstacle in the surrounding environment;
and an extraction unit, configured to extract the first characteristic information of the obstacle from the image information.
11. A processor, characterized in that the processor is configured to run a program, wherein the program, when run, performs the method according to any one of claims 1 to 5.
12. A sweeping robot, comprising the obstacle recognition device according to any one of claims 6 to 10.
CN201810928683.5A 2018-08-15 2018-08-15 Obstacle identification method and device Pending CN110833357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810928683.5A CN110833357A (en) 2018-08-15 2018-08-15 Obstacle identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810928683.5A CN110833357A (en) 2018-08-15 2018-08-15 Obstacle identification method and device

Publications (1)

Publication Number Publication Date
CN110833357A true CN110833357A (en) 2020-02-25

Family

ID=69573111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810928683.5A Pending CN110833357A (en) 2018-08-15 2018-08-15 Obstacle identification method and device

Country Status (1)

Country Link
CN (1) CN110833357A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112155486A (en) * 2020-09-30 2021-01-01 Wang Limin Control method and control device of sweeping robot
CN112859873A (en) * 2021-01-25 2021-05-28 Shandong Alexander Intelligent Technology Co., Ltd. Semantic laser-based mobile robot multi-stage obstacle avoidance system and method
WO2023116914A1 (en) * 2021-12-23 2023-06-29 Positec Power Tools (Suzhou) Co., Ltd. Self-moving robot and obstacle handling method therefor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104586322A (en) * 2013-10-31 2015-05-06 LG Electronics Inc. Moving robot and operating method
CN105511478A (en) * 2016-02-23 2016-04-20 Baidu Online Network Technology (Beijing) Co., Ltd. Robot cleaner, control method applied to same and terminal
CN106821157A (en) * 2017-04-14 2017-06-13 Puppy Electronic Appliances Internet Technology (Beijing) Co., Ltd. A cleaning method for a sweeping robot
CN107730552A (en) * 2017-09-27 2018-02-23 Shanghai Yude Communication Technology Co., Ltd. An interaction method and device, sweeping robot, and medium
CN108170137A (en) * 2017-12-15 2018-06-15 Ankobot (Shanghai) Smart Technologies Co., Ltd. Mobile robot and its control method and control system
WO2018124682A2 (en) * 2016-12-26 2018-07-05 LG Electronics Inc. Mobile robot and control method therefor


Similar Documents

Publication Publication Date Title
CN110833357A (en) Obstacle identification method and device
JP6905081B2 (en) Methods and Devices for Obtaining Vehicle Loss Assessment Images and Devices, Servers, and Terminal Devices
CN108344414A (en) A map construction and navigation method, device, and system
US11113526B2 (en) Training methods for deep networks
JP2017084320A (en) Learning method and program
CN102822770B (en) Associated with
CN111399492A (en) Robot and obstacle sensing method and device thereof
JP6412998B1 (en) Moving object tracking device, moving object tracking method, moving object tracking program
CN102201062A (en) Information processing apparatus, method and program
EP2982132B1 (en) Video processing system and method
US10282634B2 (en) Image processing method, image processing apparatus, and recording medium for reducing variation in quality of training data items
CN110895409B (en) Control method for avoiding barrier
CN110781844A (en) Security patrol monitoring method and device
CN110414360A (en) A kind of detection method and detection device of abnormal behaviour
CN112507760A (en) Method, device and equipment for detecting violent sorting behavior
US20200364517A1 (en) Information processing device, information processing method, and recording medium
WO2019152177A3 (en) System and method for neuromorphic visual activity classification based on foveated detection and contextual filtering
JP6623851B2 (en) Learning method, information processing device and learning program
CN106023990A (en) Speech control method and device based on projector equipment
JP6476678B2 (en) Information processing apparatus and information processing program
CN104899544A (en) Image processing device and image processing method
US20200394939A1 (en) Electronic label management apparatus and method
CN113807407B (en) Target detection model training method, model performance detection method and device
CN113085861A (en) Control method and device for automatic driving vehicle and automatic driving vehicle
CN105160333A (en) Vehicle model identifying method and vehicle model identifying device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination