CN112506182B - Floor sweeping robot positioning method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112506182B
CN112506182B (application CN202011184831.0A)
Authority
CN
China
Prior art keywords
obstacle
information
dimensional model
model data
entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011184831.0A
Other languages
Chinese (zh)
Other versions
CN112506182A (en)
Inventor
尤勇敏
(Name withheld at request)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Original Assignee
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority to CN202011184831.0A
Publication of CN112506182A
Application granted
Publication of CN112506182B


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - specially adapted to land vehicles
    • G05D1/0231 - using optical position detecting means
    • G05D1/0238 - using obstacle or wall sensors
    • G05D1/024 - using obstacle or wall sensors in combination with a laser
    • G05D1/0246 - using a video camera in combination with image processing means

Abstract

The application relates to the technical field of smart homes, and in particular to a sweeping robot positioning method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a live-action image of an area to be cleaned, and extracting obstacle information of a physical obstacle in the live-action image together with the distance between the sweeping robot and that obstacle; acquiring three-dimensional model data of the area to be cleaned, and determining object information for each virtual obstacle in the area based on the three-dimensional model data; determining the target virtual obstacle corresponding to the physical obstacle in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data; and determining the position of the sweeping robot in the area to be cleaned according to the determined target virtual obstacle and the distance between the sweeping robot and the physical obstacle. By adopting the method, the positioning accuracy of the sweeping robot can be improved.

Description

Floor sweeping robot positioning method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of intelligent home, in particular to a floor sweeping robot positioning method and device, computer equipment and a storage medium.
Background
With rapid economic development, the market for sweeping robots has grown at an extremely high rate. Sweeping robots have also become more intelligent, evolving from the early random-sweeping approach to intelligent sweeping by means of vision- and laser-based simultaneous localization and mapping (SLAM). Before a sweeping robot starts its cleaning work, it must be positioned in order to determine its current location, so that it can clean accurately and avoid obstacles.
However, positioning methods for sweeping robots in the conventional technology generally suffer from low positioning accuracy, and improving the positioning accuracy of the sweeping robot has become an urgent technical problem.
Disclosure of Invention
In view of the above, it is necessary to provide a sweeping robot positioning method, a sweeping robot positioning device, a computer device, and a storage medium that can improve the positioning accuracy of the sweeping robot.
A method of positioning a sweeping robot, the method comprising:
acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of a sweeping robot from the entity obstacle object;
acquiring three-dimensional model data of an area to be cleaned, and determining object information of each virtual obstacle in the area to be cleaned based on the three-dimensional model data;
determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data;
and determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
In one embodiment, acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of a sweeping robot from the entity obstacle object includes:
extracting the features of the live-action image to obtain obstacle information of an entity obstacle object in the live-action image;
the detection signal is transmitted to the entity obstacle through the detection signal transmitting and receiving device, and the distance information between the sweeping robot and the entity obstacle is determined according to the detection signal.
In one embodiment, the performing feature extraction on the live-action image to obtain feature information of the entity obstacle object in the live-action image includes:
and performing feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of the entity obstacle object in the live-action image.
In one embodiment, the method for determining the distance between the sweeping robot and the physical obstacle by transmitting the detection signal to the physical obstacle through the detection signal transceiver comprises the following steps:
transmitting a detection signal to the solid obstacle through the detection signal transmitting and receiving device, and receiving a reflection signal of the detection signal reflected by the solid obstacle;
calculating a time difference according to the emission time of the detection signal and the receiving time of the reflection signal;
and acquiring the propagation speed of the detection signal, and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
In one embodiment, determining a target virtual obstacle object corresponding to a solid obstacle object in three-dimensional model data based on the obstacle information and object information in the three-dimensional model data includes:
judging whether object information corresponding to the obstacle information exists in the three-dimensional model data or not according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data;
updating the three-dimensional model data based on the obstacle information when the object information corresponding to the obstacle information does not exist in the three-dimensional model data;
and when object information corresponding to the obstacle information exists in the three-dimensional model data, determining that the virtual obstacle object identified by that object information is the target virtual obstacle object corresponding to the entity obstacle object.
In one embodiment, updating the three-dimensional model data based on the obstacle information includes:
sending the obstacle information to the cloud server, so as to trigger the cloud server to construct a virtual obstacle corresponding to the obstacle information and to generate update data;
and receiving the update data fed back by the cloud server, and updating the three-dimensional model data based on the update data.
In one embodiment, after determining the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data, the method further includes:
setting an object label of each virtual obstacle object;
generating a cleaning route for cleaning the area to be cleaned according to the set labels and the virtual obstacle objects;
and generating a control instruction to control the sweeping robot to execute according to the sweeping route.
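The patent does not specify how the cleaning route is generated from the labels and virtual obstacles. A minimal sketch, assuming the area is discretized into a grid and using a common boustrophedon (serpentine) coverage order, cells labeled as obstacles simply being skipped (connecting moves around obstacles would need separate path planning):

```python
def plan_coverage_route(grid):
    """Boustrophedon coverage: visit every free cell row by row,
    alternating sweep direction, skipping obstacle cells.

    grid: 2D list, 0 = free cell, 1 = obstacle cell.
    Returns a list of (row, col) cells in visiting order.
    """
    route = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        for c in cols:
            if row[c] == 0:
                route.append((r, c))
    return route

# 3x3 area with one obstacle cell (e.g. a table leg) in the centre
grid = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
route = plan_coverage_route(grid)
```

The server could then translate this cell order into control instructions for the sweeping robot, as the step above describes.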
A sweeping robot positioning device, the device comprising:
the collection and extraction module is used for collecting a live-action image of the area to be cleaned and extracting the obstacle information of the solid obstacle object in the live-action image and the distance information of the sweeping robot from the solid obstacle object;
the object information acquisition module is used for acquiring three-dimensional model data of the area to be cleaned and determining object information of each virtual obstacle in the area to be cleaned based on the three-dimensional model data;
the target virtual obstacle object determining module is used for determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data;
and the position information determining module is used for determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method of any of the above embodiments when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the embodiments described above.
The sweeping robot positioning method, the sweeping robot positioning device, the computer equipment and the storage medium acquire the live-action image of the to-be-swept area, extract the obstacle information of the solid obstacle object in the live-action image and the distance information of the sweeping robot from the solid obstacle object, then acquire the three-dimensional model data of the to-be-swept area, determine the object information of each virtual obstacle object in the to-be-swept area based on the three-dimensional model data, further determine the target virtual obstacle object corresponding to the solid obstacle object in the three-dimensional model data based on the obstacle information and each object information in the three-dimensional model data, and determine the position information of the sweeping robot in the to-be-swept area according to the determined target virtual obstacle object and the distance information of the sweeping robot from the solid obstacle object. Therefore, the position information between the robot and the obstacle object can be determined according to the collected live-action image, the target virtual obstacle object corresponding to the entity obstacle object is determined according to the three-dimensional model data, the sweeping robot is accurately positioned according to the determined position information and the target virtual obstacle object, and the positioning accuracy of the sweeping robot can be improved.
Drawings
Fig. 1 is an application scenario diagram of a positioning method of a sweeping robot in an embodiment;
fig. 2 is a schematic flow chart illustrating a positioning method of the sweeping robot according to an embodiment;
FIG. 3 is a schematic diagram of the relationship between entity space and digital twin space in one embodiment;
fig. 4 is a flow chart illustrating a method for determining distance information between a sweeping robot and a physical obstacle according to an embodiment;
FIG. 5 is a schematic flow chart diagram of a method for determining a target virtual obstacle in one embodiment;
fig. 6 is a block diagram of a positioning device of a sweeping robot in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The positioning method of the sweeping robot provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 may collect live-action images of the area to be cleaned and send them to the server 104 over the network. After the server 104 acquires a live-action image, it can extract the obstacle information of the physical obstacle in the image and the distance between the sweeping robot and that obstacle. The server 104 then obtains three-dimensional model data of the area to be cleaned, determines object information for each virtual obstacle in the area based on that data, and determines the target virtual obstacle corresponding to the physical obstacle based on the obstacle information and each piece of object information in the three-dimensional model data. Further, the position of the sweeping robot in the area to be cleaned is determined according to the target virtual obstacle and the distance between the sweeping robot and the physical obstacle. The terminal 102 may be, but is not limited to, an image capturing device such as a camera or video camera, or a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device with an image capturing function; it may be installed on top of the sweeping robot. The server 104 may be implemented by an independent server or by a server cluster formed of multiple servers.
In one embodiment, as shown in fig. 2, a method for positioning a cleaning robot is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step S202, collecting a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object.
The area to be cleaned refers to the area that the sweeping robot is to clean, for example an entire apartment or a building. The live-action image of the area to be cleaned refers to an image of the actual area captured by the collection device.
In this embodiment, an image capturing device may be installed on the top of the sweeping robot, so as to capture a live-action image of the environment of the sweeping robot through the image capturing device.
In this embodiment, after acquiring the live-action image, the server may extract obstacle information of the captured physical obstacle from the live-action image, for example point cloud data of the obstacle; the obstacle information may include, but is not limited to, the size information, position information, object tag, object name, and the like of the physical obstacle.
Further, the server may also obtain information on a distance between the sweeping robot and the physical obstacle, for example, a relative coordinate position between the sweeping robot and the physical obstacle.
Step S204, three-dimensional model data of the area to be cleaned is obtained, and object information of each virtual obstacle in the area to be cleaned is determined based on the three-dimensional model data.
The three-dimensional model data refers to model data generated by Building Information Modeling (BIM) technology. Referring to fig. 3, the three-dimensional model data forms a digital twin space consistent with the physical space: it contains a virtual object for each physical object in the actual area to be cleaned, that is, virtual furnishings (virtual obstacles) corresponding to the real furnishings (physical obstacles) in the physical space. The three-dimensional model data may further include data such as the name, material, position information, and relevant dimension parameters of each virtual object, for example walls and various home appliances, and may also include a device model corresponding to the device that cleans the room, that is, the sweeping robot.
In this embodiment, the server may pre-construct a three-dimensional model based on the live-action data of the area to be cleaned, store the three-dimensional model in the server database, acquire the three-dimensional model data from the database based on the operation instruction, and perform subsequent processing.
In this embodiment, after acquiring the three-dimensional model data, the server may determine object information of each virtual obstacle object included in the three-dimensional model data, for example, an object label of each virtual obstacle object, such as a wall, a table, a chair, a bed, a cabinet, a tea table, a sofa, a kettle, a children toy, or the like, and may further include size information and position information of each virtual obstacle object.
Optionally, after acquiring the three-dimensional model data, the server may further convert the three-dimensional model data into the NDT (Normal Distributions Transform) representation required by 2D simultaneous localization and mapping (SLAM), and then process that file, which is not limited in this application.
Step S206, based on the obstacle information and the object information in the three-dimensional model data, determining a target virtual obstacle corresponding to the solid obstacle in the three-dimensional model data.
Specifically, the server may query the three-dimensional model data according to the obstacle information to determine a target virtual obstacle object corresponding to the physical obstacle object among a plurality of virtual obstacle objects of the three-dimensional model data.
For example, the server may search for a virtual obstacle object corresponding to the physical obstacle object according to the object tag, and then determine whether the searched virtual obstacle object is consistent with the physical obstacle object by comparing the size information, thereby determining whether the searched virtual obstacle object is a target virtual obstacle object corresponding to the physical obstacle object.
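The tag lookup and size comparison described above can be sketched as follows. The field names (`tag`, `size`, `position`) and the 5% relative size tolerance are illustrative assumptions, not details from the patent:

```python
def find_target_virtual_obstacle(obstacle_info, virtual_obstacles, tol=0.05):
    """Find the virtual obstacle whose object tag matches the detected
    physical obstacle and whose dimensions agree within a relative
    tolerance; return it, or None if there is no match.

    obstacle_info: dict with 'tag' and 'size' (w, d, h) from the image.
    virtual_obstacles: list of dicts from the 3D model data.
    """
    for virt in virtual_obstacles:
        if virt["tag"] != obstacle_info["tag"]:
            continue  # tag lookup first, as the text describes
        sizes_match = all(
            abs(a - b) <= tol * max(b, 1e-9)
            for a, b in zip(obstacle_info["size"], virt["size"])
        )
        if sizes_match:
            return virt  # target virtual obstacle
    return None

model = [
    {"tag": "sofa",  "size": (2.0, 0.9, 0.8),  "position": (3.0, 1.0)},
    {"tag": "table", "size": (1.2, 0.8, 0.75), "position": (1.5, 2.5)},
]
seen = {"tag": "table", "size": (1.18, 0.81, 0.75)}  # measured from image
target = find_target_virtual_obstacle(seen, model)
```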
And step S208, determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
In this embodiment, when determining the target virtual obstacle object, the server may determine the position information of the sweeping robot in the area to be swept according to the position information of the target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
In this embodiment, the server may determine a plurality of physical obstacle objects, and then the server may determine each target virtual obstacle object corresponding to each of the plurality of physical obstacle objects, and determine the position information of each target virtual obstacle object.
Further, the server may determine distance information between the sweeping robot and each of the physical obstacle objects, and then perform comprehensive calculation according to the distance information between the sweeping robot and each of the physical obstacle objects and the position information of each of the target virtual obstacle objects to determine the position information of the sweeping robot. For example, the position information of the sweeping robot is determined according to the distance information between the sweeping robot and one physical obstacle and the position information of the corresponding target virtual obstacle, and then the obtained position information of the sweeping robot is verified according to the distance information between the remaining physical obstacle and the sweeping robot and the position information of the corresponding target virtual obstacle, so that the positioning accuracy of the sweeping robot is improved.
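The patent leaves the "comprehensive calculation" unspecified. One standard way to combine the model positions of several target virtual obstacles with the measured distances is linearized trilateration; the sketch below assumes a 2D setting with three obstacles (the patent itself only requires one obstacle plus verification against the rest):

```python
def trilaterate(anchors, dists):
    """Estimate a 2D position from three known points and measured
    distances, by subtracting the first circle equation from the other
    two and solving the resulting 2x2 linear system with Cramer's rule.

    anchors: [(x1, y1), (x2, y2), (x3, y3)], target virtual obstacle
             positions taken from the 3D model data.
    dists:   [d1, d2, d3], measured robot-to-obstacle distances.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Circle i: (x - xi)^2 + (y - yi)^2 = di^2; circle 1 minus circle i
    # gives the linear equations a*x + b*y = e and c*x + f*y = g.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    e = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c, f = 2 * (x3 - x1), 2 * (y3 - y1)
    g = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a * f - b * c
    return ((e * f - b * g) / det, (a * g - e * c) / det)

# Robot truly at (1.0, 2.0); three obstacles at known model positions.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [5 ** 0.5, 13 ** 0.5, 5 ** 0.5]
pos = trilaterate(anchors, dists)
```

With noisy measurements, the extra obstacles can instead be used as the verification step the text describes: compare each predicted distance against the measured one and flag large residuals.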
The method for positioning the sweeping robot comprises the steps of collecting a live-action image of an area to be swept, extracting obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object, then obtaining three-dimensional model data of the area to be swept, determining object information of each virtual obstacle object in the area to be swept based on the three-dimensional model data, further determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data, and determining position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object. Therefore, the position information between the robot and the obstacle object can be determined according to the collected live-action image, the target virtual obstacle object corresponding to the entity obstacle object is determined according to the three-dimensional model data, the sweeping robot is accurately positioned according to the determined position information and the target virtual obstacle object, and the positioning accuracy of the sweeping robot can be improved.
In one embodiment, acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object may include: extracting the features of the live-action image to obtain obstacle information of an entity obstacle object in the live-action image; the detection signal is transmitted to the entity obstacle through the detection signal transmitting and receiving device, and the distance information between the sweeping robot and the entity obstacle is determined according to the detection signal.
The detection signal transceiver can be arranged at the top of the sweeping robot and used for transmitting detection signals and receiving reflection signals reflected by the solid obstacle.
Specifically, the server may obtain obstacle information of the physical obstacle object in the live-action image by performing feature extraction on the live-action image, for example, extracting size information, color parameters, and the like of the physical obstacle object in the live-action image.
In this embodiment, the feature extraction performed on the live-action image by the server may be performed by various image recognition technologies, which is not limited by this embodiment.
In this embodiment, the probe signal transceiver may be a laser range finder, a device with DToF (direct ToF) technology, or the like, which is not limited thereto.
In this embodiment, the server may determine the distance information between the sweeping robot and the physical obstacle according to the detection signal transmitted to the physical obstacle by the detection signal transceiver and the received reflection signal reflected by the physical obstacle.
In the above embodiment, the detection signal is transmitted to the entity obstacle object through the detection signal transceiver, and the distance information between the sweeping robot and the entity obstacle object is determined according to the detection signal, so that the relative position between the sweeping robot and the entity obstacle object can be accurately determined, and the accuracy of positioning the follow-up sweeping robot is improved.
In one embodiment, the performing feature extraction on the live-action image to obtain feature information of the entity obstacle object in the live-action image may include: and performing feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of the entity obstacle object in the live-action image.
In this embodiment, the server may pre-construct an initial neural network model, perform iterative training and testing on the constructed initial neural network model through the acquired training set data, and perform feature extraction on the live-action image after the test is passed.
Specifically, the server performs feature extraction on the live-action image through the pre-trained neural network model. This may be continuous multi-scale feature extraction: the neural network model extracts initial features at each scale, the server then fuses the initial features of adjacent scales pairwise to obtain fused features at multiple scales, and finally regression prediction is performed on each fused feature to obtain the feature information of the physical obstacle in the live-action image.
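The patent does not disclose the network architecture, so as a rough illustration only, the adjacent-scale fusion step can be sketched as nearest-neighbour upsampling of the coarser map followed by averaging, in the style of a feature pyramid. All shapes and values here are illustrative:

```python
def upsample2x(m):
    """Nearest-neighbour 2x upsampling of a 2D list of floats."""
    out = []
    for row in m:
        wide = [v for v in row for _ in (0, 1)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                   # duplicate rows
    return out

def fuse_adjacent_scales(features):
    """Fuse each feature map with the next coarser scale by upsampling
    the coarse map and averaging. features is ordered fine -> coarse,
    each map half the resolution of the previous; one fused map is
    produced per adjacent pair.
    """
    fused = []
    for fine, coarse in zip(features, features[1:]):
        up = upsample2x(coarse)
        fused.append([
            [(a + b) / 2.0 for a, b in zip(frow, urow)]
            for frow, urow in zip(fine, up)
        ])
    return fused

feats = [
    [[1.0] * 4 for _ in range(4)],  # finest scale, 4x4
    [[3.0] * 2 for _ in range(2)],  # coarser, 2x2
    [[5.0]],                        # coarsest, 1x1
]
fused = fuse_adjacent_scales(feats)
```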
In the above embodiment, the feature extraction is performed on the live-action image through the pre-trained neural network model, so that the accuracy and the extraction speed of the feature extraction of the live-action image can be improved, and the data processing efficiency can be further improved.
In one embodiment, referring to fig. 4, the transmitting a detection signal to the physical obstacle through the detection signal transceiver and determining the distance information between the sweeping robot and the physical obstacle according to the detection signal may include:
step S402, the detection signal is transmitted to the entity obstacle through the detection signal transceiver, and the reflected signal of the detection signal reflected by the entity obstacle is received.
Specifically, after the server transmits the detection signal to the physical obstacle through the detection signal transceiver, the server may record the transmission time of the detection signal, and record the corresponding reception time when receiving the reflection signal of the detection signal reflected by the physical obstacle.
Step S404, calculating the time difference according to the emission time of the detection signal and the receiving time of the reflection signal.
In this embodiment, the server calculates a time difference, i.e., a time difference between transmission of the probe signal and reception of the reflected signal, based on the reception time and the transmission time.
Step S406, acquiring the propagation speed of the detection signal, and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
Specifically, the server can acquire the propagation speed of the detection signal in air and determine the distance from the sweeping robot to the physical obstacle according to that propagation speed and the time difference, using the following formula (1).
d = c·Δt / 2        (1)
wherein d is the distance between the sweeping robot and the physical obstacle, c is the propagation speed of the detection signal in air, and Δt is the time difference.
In the above embodiment, the distance information of the sweeping robot from the entity obstacle object is determined by calculating the transceiving time difference of the detection signal and calculating the propagation speed of the obtained detection signal, so that the distance between the sweeping robot and the entity obstacle object can be accurately determined based on the propagation attribute of the detection signal in the air, and the positioning accuracy of the sweeping robot is improved.
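The time-of-flight calculation above, halved because Δt covers the signal's path to the obstacle and back, can be sketched as:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for a laser/DToF detection signal

def tof_distance(t_emit, t_receive, c=SPEED_OF_LIGHT):
    """Distance from the round-trip time of the detection signal:
    d = c * Δt / 2, where Δt = t_receive - t_emit.
    """
    dt = t_receive - t_emit
    if dt < 0:
        raise ValueError("reception time precedes emission time")
    return c * dt / 2.0

# A 20 ns round trip corresponds to roughly 3 m.
d = tof_distance(0.0, 20e-9)
```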
In one embodiment, referring to fig. 5, determining a target virtual obstacle object corresponding to the solid obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data may include:
step S502, according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data, judging whether the three-dimensional model data has object information corresponding to the obstacle information.
As described above, the server may search for the object information of each virtual obstacle object in the three-dimensional model data according to the object tag to determine whether there is object information corresponding to the obstacle information in the three-dimensional model data, and then determine whether the searched virtual obstacle object is consistent with the physical obstacle object by comparing the size information, the color information, and the like.
In this embodiment, when the server finds the corresponding virtual obstacle object from the object information of each virtual obstacle object in the three-dimensional model data based on the obstacle information and determines that the virtual obstacle object is consistent by comparing the size information, the color information, and the like, it is determined that the object information corresponding to the obstacle information exists in the three-dimensional model data, and otherwise, the object information does not exist.
In step S504, when there is no object information corresponding to the obstacle information in the three-dimensional model data, the three-dimensional model data is updated based on the obstacle information.
Specifically, when there is no object information corresponding to the obstacle information in the three-dimensional model data, the server may construct corresponding obstacle model data according to the obstacle information, such as length, width, height, color, material, and the like, and update the three-dimensional model data.
Step S506, when there is object information corresponding to the obstacle information in the virtual obstacle object, determining that the virtual obstacle object determined by the object information is a target virtual obstacle object corresponding to the physical obstacle object.
Specifically, when the server determines that object information corresponding to the obstacle information exists in the virtual obstacle object, the server determines that the corresponding virtual obstacle object is a target virtual obstacle object corresponding to the physical obstacle object, and performs subsequent processing.
In the above embodiment, whether the three-dimensional model data includes the object information corresponding to the obstacle information is determined according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data, and when the three-dimensional model data does not include the object information corresponding to the obstacle information, the three-dimensional model data is updated based on the obstacle information, so that the three-dimensional model data can be updated continuously to improve the accuracy of the three-dimensional model data.
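The matching step described above (look up by object tag, then confirm by comparing size and color) can be sketched as below; the field names and the size tolerance are illustrative assumptions, not the patent's data schema:

```python
# Sketch: find the target virtual obstacle for a detected entity obstacle.
# A candidate must share the object tag, match size within a tolerance,
# and match color; otherwise the model is missing the obstacle.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ObjectInfo:
    tag: str                           # e.g. "sofa", "thermos"
    size: Tuple[float, float, float]   # (length, width, height) in metres
    color: str

def find_target_virtual_obstacle(obstacle: ObjectInfo,
                                 model_objects: List[ObjectInfo],
                                 size_tol: float = 0.05) -> Optional[ObjectInfo]:
    """Return the matching virtual obstacle, or None if none exists."""
    for candidate in model_objects:
        if candidate.tag != obstacle.tag:
            continue                   # tag lookup fails, try next object
        size_ok = all(abs(a - b) <= size_tol
                      for a, b in zip(candidate.size, obstacle.size))
        if size_ok and candidate.color == obstacle.color:
            return candidate
    return None  # caller then updates the three-dimensional model
```

When `None` is returned, the flow of step S504 applies: the obstacle information is used to update the three-dimensional model data.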
In one embodiment, updating the three-dimensional model data based on the obstacle information may include: sending the obstacle information to a cloud server, where the obstacle information is used to trigger the cloud server to construct a virtual obstacle object corresponding to the obstacle information and generate update data; and receiving the update data fed back by the cloud server, and updating the three-dimensional model data based on the update data.
Specifically, the server can upload the obstacle information corresponding to the obstacle to the cloud server, and the cloud server constructs the corresponding virtual obstacle.
In this embodiment, upon acquiring the obstacle information, the cloud server may construct corresponding obstacle model data according to the length, width, height, color, material, and the like of the obstacle included in the obstacle information, and then store the constructed data into the three-dimensional model data maintained at the cloud end according to the corresponding position information.
Furthermore, the cloud server can generate update data based on the constructed obstacle model data and send the update data to the server, so that the server can update its own three-dimensional model data based on the update data.
In this embodiment, the update data may only include data related to obstacle model data, or may also include three-dimensional model data of the entire area to be cleaned, which is not limited in this application.
In the above embodiment, the obstacle information is sent to the cloud server, where it triggers the cloud server to construct the virtual obstacle object corresponding to the obstacle information and generate the update data; the three-dimensional model data is then updated based on the update data. This reduces the amount of data the server must process, lowers the computational load on the server, and improves data processing efficiency.
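The cloud-assisted update flow above can be sketched as follows; the cloud round trip is mocked by a plain function, and all field names are illustrative assumptions:

```python
# Sketch of the update flow: when the model lacks a matching virtual
# obstacle, the obstacle description is pushed to a cloud service, which
# builds the model fragment and returns update data for the local model.
def build_obstacle_model(info: dict) -> dict:
    """Stand-in for the cloud server: turn obstacle info into model data."""
    return {"id": info["tag"],
            "geometry": info["size"],
            "material": info.get("material", "unknown"),
            "position": info["position"]}

def update_model(model: dict, obstacle_info: dict,
                 cloud_build=build_obstacle_model) -> dict:
    update_data = cloud_build(obstacle_info)  # normally a network round trip
    model[update_data["id"]] = update_data    # merge the fragment locally
    return model

model = update_model({}, {"tag": "stool", "size": (0.4, 0.4, 0.5),
                          "position": (1.2, 3.4, 0.0)})
```

As the description notes, the returned update data may cover only the new obstacle (as here) or the whole area to be cleaned; the merge step would differ accordingly.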
In one embodiment, after determining the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data, the method may further include: setting an object label of each virtual obstacle object; generating a cleaning route for cleaning the area to be cleaned according to the set labels and the virtual obstacle objects; and generating a control instruction to control the sweeping robot to execute according to the sweeping route.
The cleaning route refers to a cleaning route generated by the server based on the three-dimensional model data of the area to be cleaned.
In this embodiment, the server may set an object tag for each virtual obstacle object. For example, for objects that must not be collided with, such as a thermos bottle or an electric fan, the server may set a collision-prohibited tag in the model; for objects such as a child's toy or a curtain, the server may set a collision-permitted tag.
Further, the server may generate a cleaning route for cleaning the area to be cleaned in a simulation manner according to the set object labels and the virtual obstacle objects in the three-dimensional model data.
Further, the server can generate a corresponding cleaning instruction according to the cleaning route and control the sweeping robot to execute the cleaning instruction, for example, controlling the sweeping robot, according to the preset sweeping route, to perform rotary sweeping, edge-following travel, suction increase, speed reduction, and the like.
In the above embodiment, object tags are set for the virtual obstacle objects, a cleaning route for the area to be cleaned is generated according to the set tags and the virtual obstacle objects, and a control instruction is generated to make the sweeping robot execute the cleaning route. For objects that may be collided with, the sweeping robot can be controlled to clean along their edges after contact, improving cleaning capability; objects that must not be collided with are avoided, preventing damage to both the sweeping robot and the objects.
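The tag-driven behaviour selection can be sketched as below; the tag strings, behaviour names, and the fallback for unknown tags are assumptions for illustration:

```python
# Sketch: choose the sweeping behaviour near an obstacle from its object
# tag. Collision-prohibited objects are detoured around; collision-
# permitted objects can be cleaned edge-on after gentle contact.
NO_COLLISION = "no_collision"   # e.g. thermos bottle, electric fan
COLLISION_OK = "collision_ok"   # e.g. child's toy, curtain

def plan_action(obstacle_tag: str) -> str:
    """Map an object tag to a sweeping behaviour."""
    if obstacle_tag == NO_COLLISION:
        return "detour"          # keep clear to avoid damage
    if obstacle_tag == COLLISION_OK:
        return "edge_clean"      # bump gently, then clean along the edge
    return "slow_approach"       # unknown tag: reduce speed and probe
```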
It should be understood that although the steps in the flowcharts of fig. 2, 4 and 5 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4, and 5 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a sweeping robot positioning device, including: an acquisition and extraction module 100, an object information acquisition module 200, a target virtual obstacle determination module 300, and a position information determination module 400, wherein:
the collection and extraction module 100 is configured to collect a live-action image of an area to be cleaned, and extract obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object.
The object information acquiring module 200 is configured to acquire three-dimensional model data of an area to be cleaned, and determine object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data.
A target virtual obstacle object determining module 300, configured to determine a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data, based on the obstacle information and each object information in the three-dimensional model data.
The position information determining module 400 is configured to determine, according to the determined target virtual obstacle object and distance information between the sweeping robot and the entity obstacle object, position information of the sweeping robot in the area to be swept.
In one embodiment, the collection and extraction module 100 may include:
and the feature extraction sub-module is used for extracting features of the live-action image to obtain obstacle information of the entity obstacle object in the live-action image.
And the distance information acquisition submodule is used for transmitting a detection signal to the entity obstacle through the detection signal receiving and transmitting device and determining the distance information between the sweeping robot and the entity obstacle according to the detection signal.
In one embodiment, the feature extraction sub-module is configured to perform feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of an entity obstacle object in the live-action image.
In one embodiment, the distance information obtaining sub-module may include:
and the signal sending and receiving unit is used for sending the detection signal to the entity obstacle through the detection signal receiving and sending device and receiving the reflection signal of the detection signal reflected by the entity obstacle.
And the time difference determining unit is used for calculating the time difference according to the emission time of the detection signal and the receiving time of the reflection signal.
And the distance information determining unit is used for acquiring the propagation speed of the detection signal and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
In one embodiment, the target virtual obstacle object determination module 300 may include:
and the judging submodule is used for judging whether object information corresponding to the obstacle information exists in the three-dimensional model data or not according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data.
And the updating submodule is used for updating the three-dimensional model data based on the obstacle information when the object information corresponding to the obstacle information does not exist in the three-dimensional model data.
And the target virtual obstacle object determining submodule is used for determining that the virtual obstacle object determined by the object information is the target virtual obstacle object corresponding to the entity obstacle object when the object information corresponding to the obstacle information exists in the virtual obstacle object.
In one embodiment, the update submodule may include:
the transmitting unit is used for transmitting the obstacle information to the cloud server, the obstacle information is used for triggering the cloud server to construct a virtual obstacle object corresponding to the obstacle information according to the obstacle information, and the updating data is generated.
And the receiving unit is used for receiving the updating data fed back by the cloud server and updating the three-dimensional model data based on the updating data.
In one embodiment, the apparatus may further include:
and a label setting module, configured to set an object label of each virtual obstacle object after the object information obtaining module 200 determines the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data.
And the cleaning route generating module is used for generating a cleaning route for cleaning the area to be cleaned according to the set labels and the virtual obstacle objects.
And the control execution module is used for generating a control instruction to control the sweeping robot to execute according to the sweeping route.
For specific limitations of the sweeping robot positioning device, reference may be made to the above limitations of the sweeping robot positioning method, and details are not described herein again. All or part of the modules in the sweeping robot positioning device may be implemented by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used for storing data such as live-action images, distance information, object information, position information and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for positioning a sweeping robot.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of a sweeping robot from the entity obstacle object; acquiring three-dimensional model data of an area to be cleaned, and determining object information of each virtual obstacle in the area to be cleaned based on the three-dimensional model data; determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data; and determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
In one embodiment, the acquiring of the live-action image of the to-be-cleaned area and extracting the obstacle information of the physical obstacle object in the live-action image and the distance information of the sweeping robot from the physical obstacle object when the processor executes the computer program may include: extracting the features of the live-action image to obtain obstacle information of an entity obstacle object in the live-action image; the detection signal is transmitted to the entity obstacle through the detection signal transmitting and receiving device, and the distance information between the sweeping robot and the entity obstacle is determined according to the detection signal.
In one embodiment, the performing, by the processor, the feature extraction on the live-action image when the computer program is executed to obtain the feature information of the entity obstacle object in the live-action image may include: and performing feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of the entity obstacle object in the live-action image.
In one embodiment, the processor, when executing the computer program, implements transmitting the detection signal to the physical obstacle through the detection signal transceiver, and determining the distance information between the sweeping robot and the physical obstacle according to the detection signal, which may include: transmitting a detection signal to the solid obstacle through the detection signal transmitting and receiving device, and receiving a reflection signal of the detection signal reflected by the solid obstacle; calculating a time difference according to the emission time of the detection signal and the receiving time of the reflection signal; and acquiring the propagation speed of the detection signal, and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
In one embodiment, the processor, when executing the computer program, determines a target virtual obstacle object corresponding to the physical obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data, and may include: judging whether object information corresponding to the obstacle information exists in the three-dimensional model data or not according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data; when the three-dimensional model data does not have object information corresponding to the obstacle information, updating the three-dimensional model data based on the obstacle information; and when the object information corresponding to the obstacle information exists in the virtual obstacle object, determining that the virtual obstacle object determined by the object information is a target virtual obstacle object corresponding to the entity obstacle object.
In one embodiment, the processor, when executing the computer program, implements updating the three-dimensional model data based on the obstacle information, which may include: sending the obstacle information to a cloud server, where the obstacle information is used to trigger the cloud server to construct a virtual obstacle object corresponding to the obstacle information and generate update data; and receiving the update data fed back by the cloud server, and updating the three-dimensional model data based on the update data.
In one embodiment, after the processor executes the computer program to determine the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data, the following steps can be further implemented: setting an object label of each virtual obstacle object; generating a cleaning route for cleaning the area to be cleaned according to the set labels and the virtual obstacle objects; and generating a control instruction to control the sweeping robot to execute according to the sweeping route.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of a sweeping robot from the entity obstacle object; acquiring three-dimensional model data of an area to be cleaned, and determining object information of each virtual obstacle in the area to be cleaned based on the three-dimensional model data; determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data; and determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
In one embodiment, the computer program, when executed by the processor, implements acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object, and the acquiring may include: extracting the features of the live-action image to obtain obstacle information of an entity obstacle object in the live-action image; the detection signal is transmitted to the entity obstacle through the detection signal transmitting and receiving device, and the distance information between the floor sweeping robot and the entity obstacle is determined according to the detection signal.
In one embodiment, the computer program, when executed by the processor, performs feature extraction on the live-action image to obtain feature information of the entity obstacle object in the live-action image, and may include: and performing feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of the entity obstacle object in the live-action image.
In one embodiment, the computer program, when executed by the processor, for transmitting a detection signal to the physical obstacle through the detection signal transceiver and determining distance information between the sweeping robot and the physical obstacle according to the detection signal, may include: transmitting a detection signal to the solid obstacle through the detection signal transmitting and receiving device, and receiving a reflection signal of the detection signal reflected by the solid obstacle; calculating a time difference according to the emission time of the detection signal and the receiving time of the reflection signal; and acquiring the propagation speed of the detection signal, and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
In one embodiment, the computer program when executed by the processor for implementing the determining the target virtual obstacle object corresponding to the physical obstacle object in the three-dimensional model data based on the obstacle information and the object information in the three-dimensional model data may include: judging whether object information corresponding to the obstacle information exists in the three-dimensional model data or not according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data; updating the three-dimensional model data based on the obstacle information when the object information corresponding to the obstacle information does not exist in the three-dimensional model data; and when the object information corresponding to the obstacle information exists in the virtual obstacle object, determining that the virtual obstacle object determined by the object information is a target virtual obstacle object corresponding to the entity obstacle object.
In one embodiment, the computer program, when executed by the processor to implement updating the three-dimensional model data based on the obstacle information, may include: sending the obstacle information to a cloud server, where the obstacle information is used to trigger the cloud server to construct a virtual obstacle object corresponding to the obstacle information and generate update data; and receiving the update data fed back by the cloud server, and updating the three-dimensional model data based on the update data.
In one embodiment, the computer program when executed by the processor, after determining the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data, may further implement the steps of: setting an object label of each virtual obstacle object; generating a cleaning route for cleaning the area to be cleaned according to the set labels and the virtual obstacle objects; and generating a control instruction to control the sweeping robot to execute according to the sweeping route.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for positioning a sweeping robot, the method comprising:
acquiring a live-action image of an area to be cleaned, and extracting obstacle information of an entity obstacle object in the live-action image and distance information of a sweeping robot from the entity obstacle object;
acquiring three-dimensional model data of an area to be cleaned, and determining object information of each virtual obstacle in the area to be cleaned based on the three-dimensional model data;
determining a target virtual obstacle object corresponding to the entity obstacle object in the three-dimensional model data based on the obstacle information and each object information in the three-dimensional model data; wherein the obstacle information and the object information each include an object tag and size information, and determining the target virtual obstacle includes: searching for a virtual obstacle object corresponding to the entity obstacle object according to the object tag, and determining whether the searched virtual obstacle object is consistent with the entity obstacle object according to the size information so as to determine whether the searched virtual obstacle object is the target virtual obstacle object;
and determining the position information of the sweeping robot in the area to be swept according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
2. The method according to claim 1, wherein the collecting of the live-action image of the area to be cleaned and the extracting of the obstacle information of the solid obstacle object in the live-action image and the distance information of the sweeping robot from the solid obstacle object comprises:
extracting the features of the live-action image to obtain obstacle information of an entity obstacle object in the live-action image;
and transmitting a detection signal to the entity obstacle through a detection signal transmitting and receiving device, and determining distance information between the floor sweeping robot and the entity obstacle according to the detection signal.
3. The method according to claim 2, wherein the performing feature extraction on the live-action image to obtain feature information of a solid obstacle object in the live-action image comprises:
and performing feature extraction on the live-action image through a pre-trained neural network model to obtain feature information of the entity obstacle object in the live-action image.
4. The method according to claim 2, wherein the transmitting a probe signal to the physical obstacle through a probe signal transceiver and determining distance information between the sweeping robot and the physical obstacle according to the probe signal comprises:
transmitting a detection signal to the solid obstacle through a detection signal transmitting and receiving device, and receiving a reflection signal of the detection signal reflected by the solid obstacle;
calculating a time difference according to the emission time of the detection signal and the receiving time of the reflection signal;
and acquiring the propagation speed of the detection signal, and determining the distance information of the sweeping robot from the entity obstacle object based on the propagation speed and the time difference.
5. The method according to claim 1, wherein the determining a target virtual obstacle object in the three-dimensional model data corresponding to the physical obstacle object based on the obstacle information and each object information in the three-dimensional model data comprises:
judging whether object information corresponding to the obstacle information exists in the three-dimensional model data or not according to the obstacle information and the object information of each virtual obstacle in the three-dimensional model data;
updating the three-dimensional model data based on the obstacle information when the object information corresponding to the obstacle information does not exist in the three-dimensional model data;
and when the object information corresponding to the obstacle information exists in the virtual obstacle, determining that the virtual obstacle determined by the object information is a target virtual obstacle corresponding to the entity obstacle.
6. The method of claim 5, wherein said updating the three-dimensional model data based on the obstacle information comprises:
the obstacle information is sent to a cloud server, and the obstacle information is used for triggering the cloud server to construct a virtual obstacle corresponding to the obstacle information according to the obstacle information and generate updating data;
and receiving the update data fed back by the cloud server, and updating the three-dimensional model data based on the update data.
7. The method according to claim 1, wherein after determining the object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data, further comprising:
setting an object label of each virtual obstacle object;
generating a cleaning route for cleaning the area to be cleaned according to the set object labels and the virtual obstacle objects;
and generating a control instruction to control the sweeping robot to clean along the cleaning route.
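Claim 7 does not prescribe a route-planning algorithm. A minimal sketch, assuming the area is decomposed into a grid and swept in a boustrophedon (S-shaped) pattern, shows only how the labelled virtual obstacles constrain the generated route; a real planner would also route around obstacles rather than merely skip their cells:

```python
def plan_cleaning_route(width, height, obstacle_cells):
    """Generate a simple S-shaped cleaning route over a width x height
    grid of the area to be cleaned, skipping cells occupied by virtual
    obstacle objects."""
    route = []
    for y in range(height):
        # Alternate sweep direction on each row for an S-shaped path.
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            if (x, y) not in obstacle_cells:
                route.append((x, y))
    return route
```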
8. A sweeping robot positioning apparatus, characterized in that the apparatus comprises:
a collection and extraction module, configured to collect a live-action image of an area to be cleaned, and to extract obstacle information of an entity obstacle object in the live-action image and distance information of the sweeping robot from the entity obstacle object;
an object information acquisition module, configured to acquire three-dimensional model data of the area to be cleaned, and to determine object information of each virtual obstacle object in the area to be cleaned based on the three-dimensional model data;
a target virtual obstacle object determination module, configured to determine, in the three-dimensional model data, a target virtual obstacle object corresponding to the entity obstacle object based on the obstacle information and the object information in the three-dimensional model data; wherein the obstacle information and the object information each include an object tag and size information, and determining the target virtual obstacle object comprises: searching for a virtual obstacle object corresponding to the entity obstacle object according to the object tag, and determining, according to the size information, whether the found virtual obstacle object is consistent with the entity obstacle object, so as to determine whether the found virtual obstacle object is the target virtual obstacle object;
and a position information determination module, configured to determine position information of the sweeping robot in the area to be cleaned according to the determined target virtual obstacle object and the distance information of the sweeping robot from the entity obstacle object.
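The position-determination step can be sketched as trilateration: once target virtual obstacles are matched, their known model coordinates plus the measured distances constrain the robot position. The least-squares linearization below is one standard way to solve this; the patent does not prescribe a specific solver, and the use of planar coordinates is an assumption:

```python
import numpy as np

def position_from_obstacles(anchors, distances):
    """Estimate the robot's 2D position from known obstacle positions
    (anchors, from the 3D model data) and measured distances to them.
    Subtracting the first circle equation from the others turns the
    nonlinear system into a linear least-squares problem."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # (x - xi)^2 + (y - yi)^2 = di^2, linearized against anchor 0:
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # array [x, y]
```

With three non-collinear obstacles the system is fully determined; more obstacles over-determine it and the least-squares fit averages out ranging noise.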
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011184831.0A 2020-10-29 2020-10-29 Floor sweeping robot positioning method and device, computer equipment and storage medium Active CN112506182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011184831.0A CN112506182B (en) 2020-10-29 2020-10-29 Floor sweeping robot positioning method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112506182A CN112506182A (en) 2021-03-16
CN112506182B true CN112506182B (en) 2023-03-21

Family

ID=74954476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011184831.0A Active CN112506182B (en) 2020-10-29 2020-10-29 Floor sweeping robot positioning method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112506182B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024043831A1 (en) * 2022-08-23 2024-02-29 Nanyang Technological University Mobile robot initialization in a building based on a building information model (bim) of the building

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106197452A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment and system
CN107450557A (en) * 2017-09-10 2017-12-08 南京中高知识产权股份有限公司 A kind of sweeping robot method for searching based on high in the clouds memory
CN108247647B (en) * 2018-01-24 2021-06-22 速感科技(北京)有限公司 Cleaning robot
CN108763571B (en) * 2018-06-05 2021-02-05 北京智行者科技有限公司 Operation map updating method
US10611028B1 (en) * 2018-11-30 2020-04-07 NextVPU (Shanghai) Co., Ltd. Map building and positioning of robot
CN111609852A (en) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 Semantic map construction method, sweeping robot and electronic equipment
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras


Similar Documents

Publication Publication Date Title
CN111060101B (en) Vision-assisted distance SLAM method and device and robot
US10482619B2 (en) Method and apparatus for combining data to construct a floor plan
JP5380789B2 (en) Information processing apparatus, information processing method, and computer program
Liang et al. Image based localization in indoor environments
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN104574386A (en) Indoor positioning method based on three-dimensional environment model matching
CN102257529B (en) Person-judging device and method
JP4880805B2 (en) Object position estimation apparatus, object position estimation method, and object position estimation program
JP2002048513A (en) Position detector, method of detecting position, and program for detecting position
JP2011022157A (en) Position detection apparatus, position detection method and position detection program
Liang et al. Image-based positioning of mobile devices in indoor environments
CN115205470B (en) Continuous scanning repositioning method, device, equipment, storage medium and three-dimensional continuous scanning method
KR20150127503A (en) Service providing system and method for recognizing object, apparatus and computer readable medium having computer program recorded therefor
CN112106111A (en) Calibration method, calibration equipment, movable platform and storage medium
CN112506182B (en) Floor sweeping robot positioning method and device, computer equipment and storage medium
Liang et al. Reduced-complexity data acquisition system for image-based localization in indoor environments
CN112220405A (en) Self-moving tool cleaning route updating method, device, computer equipment and medium
JP2020052977A (en) Information processing device, information processing method, and program
CN112348944B (en) Three-dimensional model data updating method, device, computer equipment and storage medium
CN112200907B (en) Map data generation method and device for sweeping robot, computer equipment and medium
CN115700507B (en) Map updating method and device
CN113609985B (en) Object pose detection method, detection device, robot and storable medium
JP7444292B2 (en) Detection system, detection method, and program
Bailey et al. Simultaneous Localisation and Mapping (SLAM) Part 2: State of the Art
Dimitrova-Grekow et al. Indoor Mapping Using Sonar Sensor and Otsu Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant