CN112075879A - Information processing method, device and storage medium - Google Patents


Info

Publication number
CN112075879A
CN112075879A (application number CN201910518047.XA)
Authority
CN
China
Prior art keywords
obstacle
type
information
determining
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910518047.XA
Other languages
Chinese (zh)
Inventor
范泽宣
陈远
林周雄
沈大明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Robozone Technology Co Ltd
Original Assignee
Midea Group Co Ltd
Jiangsu Midea Cleaning Appliances Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Group Co Ltd, Jiangsu Midea Cleaning Appliances Co Ltd filed Critical Midea Group Co Ltd
Priority to CN201910518047.XA priority Critical patent/CN112075879A/en
Priority to PCT/CN2019/111733 priority patent/WO2020248458A1/en
Publication of CN112075879A publication Critical patent/CN112075879A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02-A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Abstract

An embodiment of the invention provides an information processing method applied to an electronic device, the method comprising: acquiring images while moving along a preset path, to obtain an image acquired at each of at least one position on the path; determining, based on the image acquired at each of the at least one position, information related to at least one obstacle contained in the image, the obstacle-related information comprising at least position information and attribute information; and establishing a map of the space in which the electronic device is located based on the related information of the at least one obstacle, the map being capable of showing at least the position information of the at least one obstacle.

Description

Information processing method, device and storage medium
Technical Field
The present invention relates to the field of intelligent robot technology, and in particular, to an information processing method, apparatus, and storage medium.
Background
With the continuous development of science and technology, intelligent robots have gradually entered ordinary households and are accepted by more and more people. These robots can perform cleaning, delivery, monitoring and similar tasks, but every task requires path planning, that is, selecting a safe and feasible route so that the robot avoids colliding with obstacles in its working space.
Most current intelligent robots achieve positioning and navigation through gyroscopes or lidar. These methods cannot judge the specific type of an obstacle and therefore cannot choose the correct response; for example, a stationary person or animal is treated as a fixed obstacle, so an erroneous map is constructed, which prevents the robot from executing its tasks correctly.
Disclosure of Invention
In view of the above, it is desirable to provide an information processing method, apparatus, and storage medium that can determine the specific type of an obstacle during cleaning and thereby construct an accurate map of the space based on that type.
To achieve this purpose, the technical solution of the invention is realized as follows:
An embodiment of the invention provides an information processing method applied to an electronic device, the method comprising:
acquiring images while moving along a preset path, to obtain an image acquired at each of at least one position on the preset path;
determining, based on the image acquired at each of the at least one position, information related to at least one obstacle contained in the image, wherein the obstacle-related information comprises at least position information and attribute information;
establishing a map of the space in which the electronic device is located based on the related information of the at least one obstacle, wherein the map is capable of showing at least the position information of the at least one obstacle.
In the above solution, determining the position information of the at least one obstacle contained in the image based on the image acquired at each of the at least one position comprises:
determining the position information of the at least one obstacle contained in the image based on at least one of: the acquired image together with the parameters of the acquisition unit, and the measured distance to the obstacle.
In the above solution, determining the attribute information of the at least one obstacle contained in the image based on the image acquired at each of the at least one position comprises:
determining the attribute information of the at least one obstacle contained in the image based on the image acquired at each of the at least one position and a pre-trained neural network model.
In the above aspect, the method further includes:
determining that the at least one obstacle contains a first type of obstacle, and acquiring position information of the first type of obstacle;
and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
In the above aspect, the method further includes:
determining that the at least one obstacle contains a second type of obstacle, and acquiring attribute information of the second type of obstacle;
determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
In the above aspect, the method further includes:
determining that the at least one obstacle contains a third type of obstacle, acquiring the position information of the third type of obstacle, and marking the position information of the third type of obstacle on the map; the third type of obstacle is an obstacle that may move.
In the above aspect, the method further includes:
receiving a work instruction;
determining a target work area based on the received work instruction, the target area being located within any one of the at least one sub-area;
and determining a route to the target area based on the device's own position information, the target area, and the map of the space in which it is located.
In the above aspect, the method further includes:
storing the position information of a third type of obstacle;
detecting whether a third type of obstacle still exists at the stored position and, upon determining that it does not, moving to that position to work.
An embodiment of the present invention provides an information processing apparatus, comprising an acquisition unit, a determining unit, and a modeling unit, wherein:
the acquisition unit is configured to acquire images while moving along a preset path, to obtain an image acquired at each of at least one position on the preset path;
the determining unit is configured to determine, based on the image acquired at each of the at least one position, information related to at least one obstacle contained in the image, wherein the obstacle-related information comprises at least position information and attribute information;
the modeling unit is configured to establish a map of the space in which the electronic device is located based on the related information of the at least one obstacle, wherein the map is capable of showing at least the position information of the at least one obstacle.
In the foregoing aspect, the determining unit includes: a location determining subunit;
the position determining subunit is configured to determine the position information of the at least one obstacle contained in the image based on at least one of: the image acquired at each of the at least one position together with the parameters of the acquisition unit, and the measured distance to the obstacle.
In the foregoing solution, the determining unit further includes: an attribute determination subunit;
the attribute determining subunit is configured to determine attribute information of at least one obstacle included in the image, based on an image acquired at each of the at least one location and a neural network model obtained through pre-training.
In the above solution, the apparatus further comprises: dividing the cells;
the dividing unit is used for determining that the at least one obstacle contains a first type of obstacle and acquiring the position information of the first type of obstacle; and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
In the above solution, the apparatus further comprises: a region attribute determination unit;
the region attribute determining unit is used for determining that the at least one obstacle contains a second type of obstacle and acquiring attribute information of the second type of obstacle; determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
In the above scheme, the device further comprises a marking unit;
the marking unit is configured to determine that the at least one obstacle contains a third type of obstacle, acquire the position information of the third type of obstacle, and mark that position information on the map; the third type of obstacle is an obstacle that may move.
In the above solution, the apparatus further comprises: a first processing unit;
the first processing unit is configured to receive a work instruction and determine a target work area based on it, the target area being located within any one of the at least one sub-area;
and to determine a route to the target area based on the device's own position information, the target area, and the map of the space in which it is located.
In the above solution, the apparatus further comprises: a second processing unit;
the second processing unit is configured to store the position information of the third type of obstacle, detect whether a third type of obstacle still exists at the stored position and, upon determining that it does not, move to that position to work.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the above methods.
An embodiment of the present invention further provides an information processing apparatus, including: a processor and a memory for storing a computer program operable on the processor, wherein the processor is operable to perform the steps of any of the above methods when executing the computer program.
The information processing method and apparatus provided by the embodiments of the invention continuously collect images while moving and determine, from those images, the position information and attribute information of the obstacles they contain, so that a map of the space can be established from the obtained information. Because images are collected at a plurality of positions along the way, the specific type of each obstacle can be determined by image analysis, distinguishing real, fixed obstacles from pseudo-obstacles that may move. A correct space map can then be created from the recognition result, providing a guarantee for subsequent navigation.
Drawings
FIG. 1 is a flow chart of an information processing method according to an embodiment of the present invention;
FIG. 2 illustrates two implementations of determining location information of an obstacle;
fig. 3 is a space map obtained by dividing the space map according to whether the obstacle is fixed;
FIG. 4 is a functional diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings; it will be apparent that the described embodiments are some, but not all, of the embodiments of the invention.
All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
An information processing method provided by an embodiment of the present invention is applied to an electronic device. Fig. 1 is a flowchart of the information processing method provided by the embodiment of the present invention; as shown in fig. 1, the method includes:
S101, acquiring images while moving along a preset path, to obtain an image acquired at each of at least one position on the preset path;
S102, determining, based on the image acquired at each of the at least one position, information related to at least one obstacle contained in the image, wherein the obstacle-related information comprises at least position information and attribute information;
S103, establishing a map of the space in which the electronic device is located based on the related information of the at least one obstacle, wherein the map is capable of showing at least the position information of the at least one obstacle.
It should be noted that the electronic device may be any device that performs tasks intelligently in daily life, such as a sweeping robot, a delivery robot, or a monitoring robot.
To solve the problem that obstacles which exist only temporarily in a room make the drawn room map inaccurate, the electronic device may be provided with an image acquisition unit that captures images while the device moves.
Here, image acquisition is performed at each of at least one position on the preset path; specifically, images may be acquired at only some positions on the path, or at every position on it. Neither the acquisition positions nor the acquisition frequency is limited here.
The at least one position may be any position on the preset path; that is, images may be acquired at any one position, or at any several positions.
The at least one position may also be set by a rule: for example, the acquisition positions may be selected at a preset interval, where the interval may be a distance interval or a time interval.
As an example, when the interval is a distance interval, an image may be captured every time the electronic device has moved 20 cm; when the interval is a time interval, an image may be captured every 5 s of movement.
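The distance- and time-interval triggers described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the class and method names, and the use of cumulative odometry and elapsed time as inputs, are assumptions.

```python
# Sketch of an interval-based capture trigger. The 20 cm and 5 s defaults
# come from the example above; everything else is an illustrative assumption.
class CaptureTrigger:
    def __init__(self, distance_interval_m=0.20, time_interval_s=5.0):
        self.distance_interval_m = distance_interval_m
        self.time_interval_s = time_interval_s
        self._last_capture_distance = 0.0
        self._last_capture_time = 0.0

    def should_capture(self, travelled_m, elapsed_s):
        """Return True when either the distance or the time interval has elapsed."""
        if (travelled_m - self._last_capture_distance >= self.distance_interval_m
                or elapsed_s - self._last_capture_time >= self.time_interval_s):
            self._last_capture_distance = travelled_m
            self._last_capture_time = elapsed_s
            return True
        return False
```

The robot's control loop would call `should_capture` with its current odometry and clock reading and take a picture whenever it returns True.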
Here, an obstacle is any object detected by the electronic device during a task that blocks its forward movement; an obstacle may be a door, a refrigerator, or the like.
The position information of an obstacle is its distribution position within the space where the electronic device is located and can be represented by coordinates. The attribute information of an obstacle is its type; for example, the obstacle may be a bed, a refrigerator, a person, a dog, and so on.
Here, to better describe the implementation principle of the information processing method of the invention, the following description takes a sweeping robot as the electronic device.
In the embodiment of the invention, the sweeping robot may perform full-coverage sweeping on its first run, where full-coverage sweeping means the robot sweeps along a preset path until it has traversed every reachable position in the space.
Optionally, a clear button may be provided on the sweeping robot to erase recorded space maps; activating it clears the stored maps, after which a new round of full-coverage sweeping can begin. This allows existing maps to be deleted selectively to save the robot's memory when many stored maps are no longer useful. Of course, a new round of full-coverage sweeping can also be started without pressing the clear button.
In practical applications, to measure the specific position of an obstacle, the charging pile of the sweeping robot may be set as the origin of a coordinate system. Other locations may of course be chosen as the origin, for example the entrance door or the master bedroom door; the choice of origin is not limited here.
Here, the purpose of establishing a coordinate system is to represent the position of each obstacle in the space more intuitively. The position information may also be represented in other ways, for example relatively: the relative distance between each obstacle and the sweeping robot is measured, and path navigation is realized through those relative distances.
Here, after the position information and the attribute information of the obstacle are determined from the acquired image, a map can be created based on the measured information.
Next, the determination of the attribute information and the position information of an obstacle is described in detail.
Here, the obstacle-related information to be determined consists of the obstacle's position information and attribute information.
Determination of the position information of an obstacle:
Fig. 2 shows two implementations for determining the position information of an obstacle. As shown in fig. 2, the position information of the at least one obstacle contained in the image may be determined based on at least one of: the image acquired at each position together with the parameters of the acquisition unit, and the measured distance to the obstacle. Here, the position information may be coordinate information.
Specifically, the position information of an obstacle may be determined from the acquired image and the parameters of the acquisition unit.
Here, the parameters of the acquisition unit include the focal length of the acquisition device; the acquisition device may be a monocular camera, a binocular camera, or any other tool capable of acquiring image data. In this embodiment, the acquisition device may be a camera.
After an image is acquired, the target to be detected in the image is identified and its two-dimensional position in the image is obtained; the position information of the target is then calculated from that two-dimensional position and the parameters of the acquisition unit. Here, the target to be detected is an obstacle contained in the image, and the position information is coordinate information.
Since the actual coordinate information of the target is being calculated, either its three parameters on the X, Y and Z axes, or its two parameters on the X and Y axes, need to be obtained.
In the embodiment of the invention, after the two-dimensional position of the target in the image is obtained, the coordinate information of the target can be calculated from that two-dimensional position and the parameters of the acquisition unit, based on the camera imaging principle.
Identifying the target to be detected in the acquired image and obtaining its two-dimensional position in the image may be done as follows:
detecting the position of the target in the image with a neural network algorithm;
setting a 2-D frame around the target based on its detected position, where the centre of the 2-D frame is the centre of the target.
Here, the two-dimensional position information of the target in the image comprises the centre position of its 2-D frame, the height of the 2-D frame, and similar quantities; the parameters of the acquisition unit are the focal length of the camera on the sweeping robot, the height of the camera above the ground, and so on.
Then, calculating the position information of the target from its two-dimensional position in the image and the parameters of the acquisition unit comprises:
calculating the position information of the target based on the centre position of its 2-D frame, the height of the 2-D frame, the focal length of the acquisition unit, the centre position of the image, and the camera height of the sweeping robot. The calculation follows the camera imaging principle; the detailed derivation is omitted here.
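As one illustration of this kind of calculation, the sketch below projects the bottom centre of a 2-D frame onto the floor using the pinhole (similar-triangles) model. It assumes the obstacle rests on the floor and that the focal length in pixels, the image centre, and the camera mounting height are known; all names and the exact formula are illustrative assumptions, not the patent's own derivation.

```python
# Minimal pinhole-camera sketch: map a detection box's floor-contact point
# to a (forward, lateral) position in the camera frame. All parameter
# names are illustrative assumptions.
def obstacle_ground_position(box_center_x, box_bottom_y,
                             focal_px, cx, cy, camera_height_m):
    """Project the bottom-centre of a 2-D detection box onto the floor plane.

    Returns (forward, lateral) distances in metres in the camera frame.
    """
    dv = box_bottom_y - cy
    if dv <= 0:
        raise ValueError("box bottom must lie below the image centre "
                         "for a floor-contact point")
    forward = focal_px * camera_height_m / dv           # similar triangles
    lateral = (box_center_x - cx) * forward / focal_px  # scale pixel offset
    return forward, lateral
```

With a 500 px focal length, a 640x480 image centred at (320, 240), and a camera 10 cm above the floor, a box bottom at row 290 maps to an obstacle about 1 m ahead.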
It should be noted that the obstacle detection of this embodiment may be performed by a classifier in a current detection system: for example, neural network algorithms such as R-CNN, Fast R-CNN, YOLO, or SSD may be used to identify the target to be detected in the image and to set a 2-D frame marking it.
To detect obstacles, these detection systems run the classifier at different positions in the image, at different scales, and at uniform intervals across the whole image. For example, the R-CNN algorithm first generates potential bounding boxes in the image and then runs the classifier on these boxes; after classification, the bounding boxes are refined, duplicate detections are eliminated, and the boxes are re-scored according to other objects in the scene, so as to find where each object is in the image and what it is.
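The "eliminate duplicate detections" step mentioned above is commonly implemented with non-maximum suppression (NMS). The patent does not give code for it, so the following is a minimal illustrative sketch rather than the patent's method.

```python
# Minimal non-maximum suppression (NMS) sketch: keep the highest-scoring
# box in each cluster of heavily overlapping detections.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Two boxes covering the same obstacle collapse to the higher-scoring one, while a distant box survives untouched.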
The position information of an obstacle may also be determined from a measured distance to it, specifically:
the target obstacle can be determined from the acquired image; after the distance between the robot and the target obstacle is measured, the position or coordinate information of the obstacle can be determined from the position coordinates of the sweeping robot and the measured distance.
Here, the distance between the sweeping robot and the target obstacle may be measured by a ranging device or by a sensor installed on the robot.
For a ranging device: the device measures the time the emitted wave needs to travel from the sweeping robot to the obstacle and back, and from that time determines the distance between them. The ranging device may be a range finder, a lidar, or the like.
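The round-trip-time ranging and the subsequent coordinate calculation can be sketched as follows. The propagation speed shown applies to radio or laser waves; the function names, and the assumption that the robot knows its own pose (position plus heading angle), are illustrative and not stated in the patent.

```python
# Sketch of round-trip-time ranging and placing the obstacle on the map
# from the robot's pose. Names and the pose representation are assumptions.
import math

SPEED_OF_LIGHT_M_S = 299_792_458.0  # for radio/laser ranging

def range_from_round_trip(round_trip_s):
    """Distance = propagation speed x round-trip time / 2."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

def obstacle_world_coords(robot_x, robot_y, robot_heading_rad, distance_m):
    """Project the measured range along the robot's heading onto the map."""
    return (robot_x + distance_m * math.cos(robot_heading_rad),
            robot_y + distance_m * math.sin(robot_heading_rad))
```

A 20 ns round trip corresponds to roughly 3 m of range; combined with the robot's map coordinates, that fixes the obstacle's coordinates.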
For a sensing device: a sensor detects the obstacle; when an obstacle is detected, the sweeping robot records its coordinates and photographs it with the camera.
Here, recording the coordinates of the obstacle via the sensor may mean calculating the size and position coordinates of the obstacle while the sweeping robot travels around it.
Determination of the attribute information of an obstacle:
In practical applications, to determine the attribute information of an obstacle, the attribute information of the at least one obstacle contained in the image may be determined based on the image acquired at each of the at least one position and a pre-trained neural network model.
Here, the training method of the neural network model includes: obtaining a plurality of sample pictures and the label data of each sample picture, where each group of sample pictures contains obstacles of the same type, and the label data marks the feature information of the obstacle in the corresponding sample picture;
and performing learning and training on the sample pictures and their label data to obtain the neural network model.
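The patent's model is a neural network trained offline on labelled sample pictures. As a dependency-free stand-in that shows the same flow, train on labelled samples and then predict an obstacle's attribute, the sketch below uses a nearest-centroid classifier over pre-extracted feature vectors; in practice a convolutional network would take its place, and all names here are illustrative assumptions.

```python
# Stand-in for the patent's pre-trained model: a nearest-centroid
# classifier over feature vectors, illustrating the labelled-training /
# attribute-prediction flow (a CNN would replace this in practice).
def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict_attribute(centroids, vec):
    """Return the obstacle type whose centroid is nearest to vec."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))
```

A new feature vector is assigned the attribute (door, bed, and so on) of its nearest class centroid.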
It should be noted that, to improve the generality of the pre-trained neural network model, the sample pictures may come from different application scenarios. Taking indoor cleaning as an example, the sample pictures may include pictures of doors, beds, refrigerators, televisions, tables, walls, and so on.
As an example, the sample pictures may be classified by obstacle type, and the pictures of each type may be used to train a neural network model corresponding to that type. The obstacle types may be determined by the obstacles' attributes; alternatively, the obstacles may be divided into fixed obstacles, obstacles movable by a third party, and self-moving obstacles, or into wall structures, objects larger than a preset size, and objects no larger than the preset size.
As an example of this classification, fixed obstacles include walls; obstacles movable by a third party include beds, refrigerators, and the like; self-moving obstacles include people and animals.
After the sample pictures are classified by obstacle type, the pictures of each type are used for learning and training with the neural network model, obtaining the model corresponding to that type of obstacle.
As an embodiment, the neural network model may be any kind of convolutional neural network model.
Further, the method further comprises:
determining that the at least one obstacle contains a first type of obstacle, and acquiring position information of the first type of obstacle;
and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
In the embodiment of the invention, a room may be divided into several sub-areas according to the positions of the obstacles; the division principle may be that obstacles serve as the boundaries of the sub-areas, or that each sub-area should contain as few obstacles as possible.
The first type of obstacle may be a door: after the position information of a door is determined, the map may be divided into several sub-areas according to that position information. Here, the division principle applied when doors are the first type of obstacle is to take the obstacle as the boundary between sub-areas.
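The door-based division can be illustrated on an occupancy grid: each door cell is treated as a sealed wall segment, and the remaining free cells are flood-filled into numbered sub-areas. The grid representation and all names below are assumptions for illustration, not the patent's stated method.

```python
# Sketch: split an occupancy grid into sub-areas, treating doors as
# temporary walls and flood-filling each connected free region.
def split_into_subareas(grid, door_cells):
    """grid: 2-D list, 0 = free, 1 = wall; door_cells: set of (row, col).

    Returns a same-shaped grid where each free cell carries a room id >= 2.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0 if cell == 0 else -1 for cell in row] for row in grid]
    for r, c in door_cells:              # seal doorways before labelling
        labels[r][c] = -1
    room = 2
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] == 0:        # unvisited free cell: new room
                stack = [(r, c)]
                labels[r][c] = room
                while stack:             # iterative flood fill
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols and labels[ny][nx] == 0:
                            labels[ny][nx] = room
                            stack.append((ny, nx))
                room += 1
    return labels
```

With a wall pierced by one doorway, sealing the door cell yields two distinct room labels, one on each side.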
In the embodiment of the present invention, the space map may be further divided according to whether the obstacle is fixed, specifically:
The obstacles can be classified by recognizing the pictures of the obstacles captured by the camera, in combination with the detected sizes of the obstacles, into a wall structure, objects larger than a preset size, and objects smaller than or equal to the preset size.
Here, a wall surface is a wall structure; a refrigerator, a sofa, a bed, and the like may be determined to be objects larger than the preset size, and a trash can, a flowerpot, and the like may be determined to be objects smaller than or equal to the preset size.
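As an illustrative sketch of this size-based classification (the label strings and the threshold value are assumptions, not values from the embodiment):

```python
def classify_obstacle(label, size, size_threshold=1.0):
    """Classify a detected obstacle into the three classes the text names:
    wall structure, object larger than a preset size, or object smaller
    than or equal to the preset size.

    `label` comes from image recognition and `size` from detection; the
    threshold of 1.0 (e.g. metres) is a hypothetical preset value.
    """
    if label == "wall":
        return "wall_structure"
    return "large_object" if size > size_threshold else "small_object"

print(classify_obstacle("wall", 5.0))          # wall_structure
print(classify_obstacle("refrigerator", 1.8))  # large_object
print(classify_obstacle("trash_can", 0.3))     # small_object
```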
Based on the above division principle, fig. 3 shows a space map obtained by dividing according to whether obstacles are fixed. As shown in fig. 3, since the positions of the wall structure and large furniture generally do not change, the space map can be generated according to the position coordinates of the wall structure and of the objects larger than the preset size, and the space map can be divided according to the position coordinates of the wall structure.
Specifically, after the positions of different walls are obtained from the captured images, the map may be divided into a plurality of sub-areas according to the position information of the walls. Here, when the obstacles are walls, the division may be performed by room; that is, by collecting images of the surrounding environment, locating and identifying the doors in the house, the originally constructed map is divided into different rooms.
To further determine the purpose of each room, the sweeping robot continuously searches, via the camera, for items in the room that can determine the attributes of the room. To this end, the method further comprises:
determining that the at least one obstacle contains a second type of obstacle, and acquiring attribute information of the second type of obstacle;
determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
Here, as an example, the second type of obstacle may be a bed; after the obstacle is determined to be a bed, the sub-area where the bed is located may be regarded as a bedroom for rest. The second type of obstacle may also be a toilet; when the obstacle is determined to be a toilet, the sub-area where the toilet is located may be regarded as a bathroom. The second type of obstacle may also be a sofa; when the obstacle is determined to be a sofa, the sub-area where the sofa is located may be regarded as a living room.
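The bed→bedroom, toilet→bathroom, sofa→living-room examples above amount to a simple lookup from a second-type obstacle to the sub-area attribute (the dictionary keys and values are illustrative names):

```python
# Second-type obstacle → sub-area attribute, per the examples in the text.
ROOM_BY_OBSTACLE = {
    "bed": "bedroom",
    "toilet": "bathroom",
    "sofa": "living_room",
}

def label_sub_area(detected_obstacles):
    """Return the first room attribute implied by a second-type obstacle,
    or None so the attribute can later be set manually, as the text allows."""
    for obstacle in detected_obstacles:
        if obstacle in ROOM_BY_OBSTACLE:
            return ROOM_BY_OBSTACLE[obstacle]
    return None

print(label_sub_area(["chair", "bed"]))  # bedroom
print(label_sub_area(["chair"]))         # None → set manually later
```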
Here, if no item capable of determining the room attribute is found, the room attribute may be set manually in a subsequent step; for example, the current room may be manually set as a rest room.
In this application, another important function of the acquisition unit is to identify people, pets, or objects that may be moved in the room. Objects that may be moved include flowerpots, chairs, and the like. When people or pets are still and occupy a certain position in the room, they are in most cases judged to be obstacles and affect the construction of the map. Likewise, flowerpots, chairs, and the like occupy a certain position in the room when they have not been moved; once they are removed, that position is no longer an obstacle. Based on this, the method further comprises:
determining that the at least one obstacle contains a third type of obstacle, acquiring the position information of the third type of obstacle, and marking the position information of the third type of obstacle on the three-dimensional map; the third type of obstacle is an obstacle that may move.
Here, a specific implementation may be: judging whether the at least one obstacle includes a third type of obstacle; and when the judgment result shows that the at least one obstacle includes a third type of obstacle, marking the position information of the third type of obstacle on the map.
In practical application, when the camera on the sweeping robot identifies a static obstacle as a person or a pet, the specific position of the person or pet can be marked on the map. The purpose of the mark here is to indicate that this position does not belong to a fixed obstacle; an obstacle may or may not be present at this position.
Further, the marking the specific position of the person or the pet on the map may further include:
and setting a color mark or a symbol mark at the specific position of the person or the pet on the map.
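A minimal sketch of marking a third-type obstacle with a symbol on a grid map (the grid representation and the marker symbol are assumptions for illustration):

```python
def mark_movable(grid, pos, symbol="M"):
    """Mark a third-type (possibly moving) obstacle with a symbol so the
    cell is flagged as not belonging to a fixed obstacle, per the text."""
    x, y = pos
    grid[y][x] = symbol
    return grid

grid = [["." for _ in range(4)] for _ in range(3)]
mark_movable(grid, (2, 1))
print(grid[1])  # ['.', '.', 'M', '.']
```

Because the marker is distinct from a fixed-obstacle cell, a later pass can clear it (the processing described for obtaining an updated map) without disturbing the wall structure.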
Here, the processing for the third type of obstacle may be:
and determining that the at least one obstacle contains a third type of obstacle, and clearing the position information of the third type of obstacle on the three-dimensional map.
Here, the reason for removing the position information of the third type of obstacle is to obtain a more accurate map, i.e., an updated map.
After the updated map is obtained, route planning for cleaning can be performed based on the updated map. That is, the route planning includes: determining a starting point position and an end point position in any one of the at least one sub-area; and determining a moving path based on the starting point position, the end point position, and the map corresponding to the space.
For the sweeping robot, the moving path is a sweeping path.
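The embodiment does not specify a planning algorithm; as one hedged stand-in, a breadth-first search on an occupancy grid finds a moving path from a starting point to an end point while avoiding marked obstacles:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Minimal BFS path planner on an occupancy grid ('#' = obstacle).
    Returns a list of (x, y) cells from start to goal, or None if the
    goal is unreachable. Purely illustrative of the planning step."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < cols and 0 <= ny < rows \
                    and grid[ny][nx] != "#" and (nx, ny) not in prev:
                prev[(nx, ny)] = cur
                queue.append((nx, ny))
    return None  # no obstacle-free route exists

grid = ["..#",
        "..#",
        "..."]
print(plan_path(grid, (0, 0), (2, 2)))
# → [(0, 0), (1, 0), (1, 1), (1, 2), (2, 2)]
```

BFS guarantees a shortest path on such a grid, which suits the sweeping scenario where the moving path is the sweeping path; a real robot would likely plan coverage rather than point-to-point routes.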
After the cleaning path is planned, the sweeping robot can clean according to the planned path. Since some articles, people, or pets in the room may move, the position coordinates of the third type of obstacle may change; it is therefore necessary to detect obstacles and update their position coordinates at any time during the cleaning process, reconstruct the room map according to the updated position coordinates, and re-plan the cleaning path.
Thus, after reconstructing the room map, the method further comprises:
receiving a work instruction;
determining a target area of work based on the received work instruction; the target region is located within any sub-region of the at least one sub-region;
and determining a route for going to the target area based on the position information of the user, the target area, and the map corresponding to the space. Here, the work instruction may be a voice command or a command issued through the APP of a terminal device. The work instruction includes the position information of the target area, and may also include the specific content of the work. For the sweeping robot, the specific content of the work instruction is sweeping.
A connection is established with the terminal device, and the space map information and the planned cleaning path are sent to the terminal device, so that the user can adjust the space map information and the planned cleaning path on the terminal device accordingly.
After the user sets a cleaning position on the terminal device, the target position sent by the terminal device is received, and cleaning is performed according to the adjusted room map information.
In the embodiment of the invention, when cleaning is carried out according to the planned route, the positions of obstacles are automatically detected, and the cleaning route is then re-planned according to the updated obstacle positions, so that the working efficiency of the sweeping robot is improved.
When an obstacle is detected during cleaning, whether an obstacle is marked at that position coordinate can be identified from the existing space map; if an obstacle is marked there, the attribute information of the marked obstacle can also be identified from the existing space map.
Here, the cases in which the space map needs to be updated are described. The space map mainly needs to be updated in the following cases: a place that originally had an obstacle no longer has one; or an obstacle appears in a place that originally had none.
Thus, a rule may be set as follows: if no obstacle is detected at the original coordinate position of an obstacle, that position coordinate in the space map is updated to obstacle-free, and any position updated to obstacle-free that belongs to the planned cleaning path is cleaned.
If an obstacle exists at a currently detected position coordinate and no obstacle is shown there in the original space map, the position coordinate of the currently detected obstacle is recorded, and that coordinate is marked as an obstacle in the room map. Thus, when the position coordinate belongs to a planned cleaning route, cleaning continues while bypassing it.
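The two update rules above can be sketched as a single map-update step (representing the map as a set of occupied cells is an assumption made for brevity):

```python
def update_map(obstacle_map, detections):
    """Apply the text's two rules: clear cells where a marked obstacle is
    no longer detected, and mark cells where a new obstacle is detected.

    `obstacle_map` is a set of (x, y) cells believed occupied;
    `detections` maps cells scanned this pass to True (occupied) / False.
    """
    updated = set(obstacle_map)
    for cell, occupied in detections.items():
        if occupied:
            updated.add(cell)      # new obstacle: mark it, route will bypass
        else:
            updated.discard(cell)  # obstacle gone: cell becomes cleanable
    return updated

m = update_map({(1, 1), (2, 2)}, {(2, 2): False, (3, 3): True})
print(sorted(m))  # [(1, 1), (3, 3)]
```

Cells not mentioned in `detections` keep their previous state, matching the idea that only currently observed positions are re-evaluated.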
In practical application, the sweeping robot can be set so that, when it is intentionally or unintentionally stopped by a person during sweeping, it remembers the place and, after subsequently finding that the person has left, returns to clean that place.
Thus, the method further comprises: storing the position information of the third type of obstacle; and detecting whether a third type of obstacle exists at the position indicated by the position information, and upon determining that none exists, moving to that position to work.
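A minimal sketch of this remember-and-return behaviour, assuming stored positions are simply compared against the cells currently detected as blocked:

```python
def pending_revisits(stored_positions, currently_blocked):
    """Return stored third-type obstacle positions that are now clear,
    i.e. spots the robot skipped (e.g. a person stood there) and should
    go back to clean, per the text."""
    return [p for p in stored_positions if p not in currently_blocked]

print(pending_revisits([(1, 2), (3, 4)], {(3, 4)}))  # [(1, 2)]
```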
In the information processing method provided by the embodiment of the invention, image acquisition is performed during movement along a preset path, and the position coordinates and attribute information of obstacles encountered during the movement are determined based on the acquired images. After the position coordinates and attribute information of the obstacles are determined, the space map can be divided into a plurality of sub-areas according to the position coordinates of the obstacles in the map, and an attribute can be determined for each sub-area. Therefore, after the sub-areas are divided, a cleaning path can be planned for each sub-area, and cleaning can be performed according to the planned cleaning path while the position coordinates of the obstacles, the room map, and the cleaning path are updated, so that the working efficiency of the sweeping robot can be improved.
Based on the same technical concept as the foregoing embodiments, an embodiment of the present invention further provides an information processing apparatus 400. As shown in fig. 4, the apparatus 400 includes: an acquisition unit 401, a determination unit 402, and a modeling unit 403; wherein:
the acquisition unit 401 is configured to perform image acquisition during a process of moving along a preset path, so as to obtain an image acquired at each of at least one position in the preset path;
the determining unit 402 is configured to determine, based on the image acquired at each of the at least one position, information related to at least one obstacle included in the image; wherein the information related to the obstacle at least comprises: position information of the obstacle, attribute information of the obstacle;
the modeling unit 403 is configured to establish a map corresponding to a space where the electronic device is located based on the related information of the at least one obstacle; wherein the map is capable of showing at least location information of the at least one obstacle.
In order to solve the problem that a drawn room map is inaccurate due to obstacles that exist only temporarily in a room, the information processing apparatus performs image acquisition via the acquisition unit during movement, and determines the position information and attribute information of obstacles from the images acquired at each position.
Optionally, the determining unit includes: a position determining subunit; the position determining subunit is configured to determine, based on the image acquired at each of the at least one position and at least one of a parameter of the acquisition unit and a distance to the obstacle, the position information of at least one obstacle contained in the image.
The determination unit further includes: an attribute determination subunit; the attribute determining subunit is configured to determine attribute information of at least one obstacle included in the image, based on an image acquired at each of the at least one location and a neural network model obtained through pre-training.
Optionally, the apparatus further comprises: a dividing unit;
the dividing unit is used for determining that the at least one obstacle contains a first type of obstacle and acquiring the position information of the first type of obstacle; and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
Optionally, the apparatus further comprises: a region attribute determination unit; the region attribute determining unit is used for determining that the at least one obstacle contains a second type of obstacle and acquiring attribute information of the second type of obstacle; determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
Optionally, the apparatus further comprises a marking unit; the marking unit is used for determining that the at least one obstacle contains a third type of obstacle, acquiring the position information of the third type of obstacle, and marking the position information of the third type of obstacle on the map; the third type of obstacle is an obstacle that may move.
Here, the apparatus further includes: a first processing unit; the first processing unit is used for receiving a working instruction; determining a target area of work based on the work instruction; the target region is located within any sub-region of the at least one sub-region;
and determining a route for going to the target area based on the position information of the user, the target area and the map corresponding to the located space.
The device further comprises: a second processing unit; the second processing unit is used for storing the position information of the third type of obstacles; and detecting whether a third type of obstacle exists at the position of the position information, determining that the third type of obstacle does not exist, and moving to the position information to work.
It should be noted that, since the principle by which the information processing apparatus 400 solves the problem is similar to that of the aforementioned information processing method applied to the electronic device, the specific implementation process and principle of the apparatus 400 can be understood with reference to the foregoing method and its implementation; repeated parts are not described again.
The information processing apparatus provided in this embodiment performs image acquisition during movement along a preset path and determines the position coordinates and attribute information of obstacles based on the acquired images. After the position coordinates and attribute information of the obstacles are determined, the space map may be divided into a plurality of sub-areas according to the position coordinates of the obstacles in the map, and an attribute may be determined for each sub-area. Therefore, after the sub-areas are divided, a cleaning path can be planned for each sub-area, and cleaning can be performed according to the planned cleaning path while the position coordinates of the obstacles, the room map, and the cleaning path are updated, so that the working efficiency of the sweeping robot can be improved.
Embodiments of the present invention further provide a computer storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the methods provided in the foregoing embodiments, which are not described herein again.
An embodiment of the present invention further provides an information processing apparatus, including: a processor, and a memory for storing a computer program capable of running on the processor, wherein the processor is configured to execute, when running the computer program, the steps of the above-described method embodiments.
Fig. 5 is a schematic diagram of a hardware configuration of an information processing apparatus according to an embodiment of the present invention. The information processing apparatus 500 includes at least one processor 501 and a memory 502; the various components of the information processing apparatus 500 are coupled together by a bus system, which, it should be understood, provides connective communication between these components. In addition to a data bus, the bus system includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled as one bus system in fig. 5.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of units is only a logical functional division; in actual implementation there may be other ways of division, such as combining multiple units or components, integrating them into another system, or omitting or not implementing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (18)

1. An information processing method applied to an electronic device, the method comprising:
acquiring an image in the process of moving along a preset path to obtain an image acquired at each position of at least one position in the preset path;
determining information related to at least one obstacle contained in the image based on the image acquired from each of the at least one position; wherein the information related to the obstacle at least comprises: position information, attribute information;
establishing a map corresponding to the space where the electronic equipment is located based on the related information of the at least one obstacle; wherein the map is capable of showing at least location information of the at least one obstacle.
2. The method of claim 1, wherein determining the position information of at least one obstacle included in the image based on the image acquired at each of the at least one position comprises:
determining the position information of at least one obstacle contained in the image based on the image acquired at each of the at least one position and at least one of a parameter of the acquisition unit and a distance to the obstacle.
3. The method of claim 1, wherein determining attribute information of at least one obstacle included in the image based on the image acquired at each of the at least one location comprises:
and determining attribute information of at least one obstacle contained in the image based on the image acquired from each position of the at least one position and the neural network model obtained by pre-training.
4. The method of claim 1, further comprising:
determining that at least one obstacle contains a first type of obstacle, and acquiring position information of the first type of obstacle;
and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
5. The method of claim 4, further comprising:
determining that the at least one obstacle contains a second type of obstacle, and acquiring attribute information of the second type of obstacle;
determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
6. The method of claim 1, further comprising:
determining that the at least one obstacle contains a third type of obstacle, acquiring the position information of the third type of obstacle, and marking the position information of the third type of obstacle on the map; the third type of obstacle is an obstacle that may move.
7. The method of claim 6, further comprising:
receiving a work instruction;
determining a target area of work based on the received work instruction; the target region is located within any sub-region of the at least one sub-region;
and determining a route for going to the target area based on the position information of the user, the target area and the map corresponding to the located space.
8. The method of claim 7, further comprising:
storing position information of a third type of obstacle;
and detecting whether a third type of obstacle exists at the position of the position information, determining that the third type of obstacle does not exist, and moving to the position information to work.
9. An information processing apparatus, the apparatus comprising: an acquisition unit, a determination unit and a modeling unit; wherein,
the acquisition unit is used for acquiring images in the process of moving along a preset path to obtain images acquired at each position of at least one position in the preset path;
the determining unit is used for determining related information of at least one obstacle contained in the image based on the image acquired from each position in the at least one position; wherein the information related to the obstacle at least comprises: position information, attribute information;
the modeling unit is used for establishing a map corresponding to the space where the electronic equipment is located based on the related information of the at least one obstacle; wherein the map is capable of showing at least location information of the at least one obstacle.
10. The apparatus of claim 9, wherein the determining unit comprises: a location determining subunit;
the position determining subunit is configured to determine, based on the image acquired at each of the at least one position and at least one of a parameter of the acquisition unit and a distance to the obstacle, the position information of at least one obstacle contained in the image.
11. The apparatus of claim 9, wherein the determining unit further comprises: an attribute determination subunit;
the attribute determining subunit is configured to determine attribute information of at least one obstacle included in the image, based on an image acquired at each of the at least one location and a neural network model obtained through pre-training.
12. The apparatus of claim 9, further comprising: a dividing unit;
the dividing unit is used for determining that the at least one obstacle contains a first type of obstacle and acquiring the position information of the first type of obstacle; and dividing the map into at least one sub-area according to the position information of the first type of obstacles.
13. The apparatus of claim 12, further comprising: a region attribute determination unit;
the region attribute determining unit is used for determining that the at least one obstacle contains a second type of obstacle and acquiring attribute information of the second type of obstacle; determining attribute information of a subregion where the second type of obstacle is located based on the attribute information of the second type of obstacle; the attribute information of the sub-region is used for characterizing the use of the sub-region.
14. The apparatus of claim 9, further comprising a marking unit;
the marking unit is used for determining that the at least one obstacle contains a third type of obstacle, acquiring the position information of the third type of obstacle, and marking the position information of the third type of obstacle on the map; the third type of obstacle is an obstacle that may move.
15. The apparatus of claim 14, further comprising: a first processing unit;
the first processing unit is used for receiving a working instruction; determining a target area of work based on the work instruction; the target region is located within any sub-region of the at least one sub-region;
and determining a route for going to the target area based on the position information of the user, the target area and the map corresponding to the located space.
16. The apparatus of claim 15, further comprising: a second processing unit;
the second processing unit is used for storing the position information of the third type of obstacles; and detecting whether a third type of obstacle exists at the position of the position information, determining that the third type of obstacle does not exist, and moving to the position information to work.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
18. An information processing apparatus characterized by comprising: a processor and a memory for storing a computer program operable on the processor, wherein the processor is operable to perform the steps of the method of any of claims 1 to 8 when the computer program is executed.
CN201910518047.XA 2019-06-14 2019-06-14 Information processing method, device and storage medium Pending CN112075879A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910518047.XA CN112075879A (en) 2019-06-14 2019-06-14 Information processing method, device and storage medium
PCT/CN2019/111733 WO2020248458A1 (en) 2019-06-14 2019-10-17 Information processing method and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910518047.XA CN112075879A (en) 2019-06-14 2019-06-14 Information processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN112075879A true CN112075879A (en) 2020-12-15

Family

ID=73734394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910518047.XA Pending CN112075879A (en) 2019-06-14 2019-06-14 Information processing method, device and storage medium

Country Status (2)

Country Link
CN (1) CN112075879A (en)
WO (1) WO2020248458A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142931A1 (en) * 2022-01-27 2023-08-03 追觅创新科技(苏州)有限公司 Robot movement path planning method and system and cleaning robot

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
US20220197295A1 (en) * 2020-12-22 2022-06-23 Globe (jiangsu) Co., Ltd. Robotic mower, and control method thereof
CN115209032B (en) * 2021-04-09 2024-04-16 美智纵横科技有限责任公司 Image acquisition method and device based on cleaning robot, electronic equipment and medium
CN113670292B (en) * 2021-08-10 2023-10-20 追觅创新科技(苏州)有限公司 Map drawing method and device, sweeper, storage medium and electronic device
CN113565299B (en) * 2021-08-11 2022-09-16 苏州乐米凡电气科技有限公司 Automatic troweling machine with self-repairing capability and control method thereof
CN113776516A (en) * 2021-09-03 2021-12-10 上海擎朗智能科技有限公司 Method and device for adding obstacles, electronic equipment and storage medium
CN113907663B (en) * 2021-09-22 2023-06-23 追觅创新科技(苏州)有限公司 Obstacle map construction method, cleaning robot, and storage medium
CN114468857A (en) * 2022-02-17 2022-05-13 美智纵横科技有限责任公司 Control method and device of cleaning equipment, cleaning equipment and readable storage medium
CN114610820A (en) * 2021-12-31 2022-06-10 北京石头创新科技有限公司 Optimization method and device for three-dimensional map display
CN115040038A (en) * 2022-06-22 2022-09-13 杭州萤石软件有限公司 Robot control method and device and robot

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104825101A (en) * 2014-02-12 2015-08-12 Lg电子株式会社 Robot cleaner and controlling method thereof
CN106067191A (en) * 2016-05-25 2016-11-02 深圳市寒武纪智能科技有限公司 The method and system of semantic map set up by a kind of domestic robot
CN106863305A (en) * 2017-03-29 2017-06-20 赵博皓 A kind of sweeping robot room map creating method and device
CN109074083A (en) * 2018-06-08 2018-12-21 珊口(深圳)智能科技有限公司 Control method for movement, mobile robot and computer storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
KR100877072B1 (en) * 2007-06-28 2009-01-07 Samsung Electronics Co., Ltd. Method and apparatus for simultaneously building a map and cleaning with a mobile robot
KR20110119118A (en) * 2010-04-26 2011-11-02 LG Electronics Inc. Robot cleaner, and remote monitoring system using the same
CN107305125A (en) * 2016-04-21 2017-10-31 China Mobile Communications Co., Ltd. Research Institute Map construction method and terminal
CN107145578B (en) * 2017-05-08 2020-04-10 Shenzhen Horizon Robotics Technology Co., Ltd. Map construction method, device, equipment and system
CN107625489A (en) * 2017-08-25 2018-01-26 Gree Electric Appliances, Inc. of Zhuhai Obstacle information processing method, device, processor and sweeping robot

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142931A1 (en) * 2022-01-27 2023-08-03 追觅创新科技(苏州)有限公司 Robot movement path planning method and system and cleaning robot

Also Published As

Publication number Publication date
WO2020248458A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN112075879A (en) Information processing method, device and storage medium
CN106863305B (en) Room map creation method and device for a floor sweeping robot
CN110522359B (en) Cleaning robot and control method of cleaning robot
CN107981790B (en) Indoor area dividing method and sweeping robot
CN113670292B (en) Map drawing method and device, sweeper, storage medium and electronic device
KR102577785B1 (en) Cleaning robot and Method of performing task thereof
CN108759844A (en) Robot relocation and environment map construction method, robot and storage medium
CN111657798B (en) Cleaning robot control method and device based on scene information and cleaning robot
CN109871420B (en) Map generation and partition method and device and terminal equipment
CN113116224B (en) Robot and control method thereof
CN113741438A (en) Path planning method and device, storage medium, chip and robot
CN106162144A (en) Visual image processing device, system and intelligent machine for night vision
CN109213142A (en) Autonomous device, autonomous method and storage medium
CN109528089A (en) Method, apparatus and chip for a stranded cleaning robot to continue walking
CN111643017B (en) Cleaning robot control method and device based on schedule information and cleaning robot
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
CN115185285B (en) Automatic obstacle avoidance method, device and equipment for dust collection robot and storage medium
CN109984691A (en) Sweeping robot control method
CN110315538B (en) Method and device for displaying barrier on electronic map and robot
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
WO2023115658A1 (en) Intelligent obstacle avoidance method and apparatus
CN110928282A (en) Control method and device for cleaning robot
CN113520246B (en) Mobile robot compensation cleaning method and system
CN111898557A (en) Map creation method, device, equipment and storage medium for self-moving equipment
KR20230134109A (en) Cleaning robot and Method of performing task thereof

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210315

Address after: No.39 Caohu Avenue, Xiangcheng Economic Development Zone, Suzhou City, Jiangsu Province

Applicant after: Midea Robozone Technology Co., Ltd.

Address before: 39 Caohu Avenue, Xiangcheng Economic Development Zone, Suzhou, Jiangsu Province, 215144

Applicant before: JIANGSU MIDEA CLEANING APPLIANCES Co.,Ltd.

Applicant before: MIDEA GROUP Co.,Ltd.

SE01 Entry into force of request for substantive examination