CN112200907B - Map data generation method and device for sweeping robot, computer equipment and medium - Google Patents

Map data generation method and device for sweeping robot, computer equipment and medium

Info

Publication number
CN112200907B
Authority
CN
China
Prior art keywords
component
dimensional model
model data
data
attribute information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011181048.9A
Other languages
Chinese (zh)
Other versions
CN112200907A (en)
Inventor
尤勇敏
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Original Assignee
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiuling Jiangsu Digital Intelligent Technology Co Ltd filed Critical Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority to CN202011181048.9A priority Critical patent/CN112200907B/en
Publication of CN112200907A publication Critical patent/CN112200907A/en
Application granted granted Critical
Publication of CN112200907B publication Critical patent/CN112200907B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • G06T3/06

Abstract

The application relates to the technical field of robot control, in particular to a map data generation method and device for a sweeping robot, computer equipment and a storage medium. The method comprises the following steps: acquiring three-dimensional model data of a target object; acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters; and converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot. By adopting the method, the accuracy of the map data can be improved.

Description

Map data generation method and device for sweeping robot, computer equipment and medium
Technical Field
The application relates to the technical field of intelligent home control, in particular to a map data generation method and device for a sweeping robot, computer equipment and a medium.
Background
With the rapid development of science and technology, smart home devices such as sweeping robots are increasingly widely used. For a sweeping robot, its floor-cleaning capability is an important index for judging its performance.
In the traditional approach, when a sweeping robot executes a cleaning task for the first time, it collects whole-house data during the cleaning process and builds a map model based on the collected whole-house data; in subsequent cleaning tasks, the sweeping robot performs path planning according to the built map.
In this approach, the map is built only from the data for the areas the sweeping robot has actually covered; for areas that have not been covered, no corresponding map data can be built, so the resulting map is incomplete and the map data is inaccurate.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a map data generating method, apparatus, computer device and medium for a sweeping robot, which can improve the accuracy of map data.
A map data generation method for a sweeping robot, the method comprising:
acquiring three-dimensional model data of a target object;
acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters;
and converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot.
In one embodiment, obtaining component parameters corresponding to components in the three-dimensional model data, and generating corresponding component attribute information based on the component parameters includes:
acquiring component parameters of each corresponding component in the three-dimensional model data from a database based on the corresponding relationship between the component and the component parameters;
and generating component attribute information of each corresponding component according to the component parameters and the corresponding components in the three-dimensional model.
In one embodiment, converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot includes:
establishing an incidence relation between each component in the three-dimensional model data and component attribute information corresponding to each component;
determining reference point coordinates in the three-dimensional model data, and converting each part in the three-dimensional model data into a corresponding plane part based on the reference point coordinates;
and acquiring the part attribute information corresponding to each part according to the association relation, and mapping the part attribute information to the corresponding plane part to obtain map data for the navigation of the sweeping robot.
In one embodiment, converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot includes:
acquiring component data of each component in the same three-dimensional space and component attribute information of a corresponding component according to the three-dimensional space where each component in the three-dimensional model data is located;
generating map data corresponding to the three-dimensional space according to the component data of each component in the same three-dimensional space and the component attribute information of the corresponding component;
and acquiring a space number of the three-dimensional space, and merging and storing the space number and the corresponding map data into a database.
In one embodiment, after the spatial numbers and the corresponding map data are merged and stored in the database, the method further includes:
receiving a configuration instruction of a configuration system to the sweeping robot, wherein the configuration instruction carries a target space number of a three-dimensional space to be configured of the sweeping robot;
and inquiring and acquiring corresponding map data from the database according to the target space number, and sending the map data to the sweeping robot.
In one embodiment, the method further comprises the following steps:
receiving a live-action image uploaded by the sweeping robot, wherein the live-action image comprises a target obstacle object;
judging whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data or not according to the target obstacle object;
and when the three-dimensional model data does not have the target virtual obstacle object corresponding to the target obstacle object, updating the three-dimensional model data according to the live-action image.
In one embodiment, the updating of the three-dimensional model data from the live-action image comprises:
performing feature extraction on the live-action image to obtain object information of a target obstacle object in the live-action image;
constructing a virtual obstacle object corresponding to the target obstacle object according to the object information;
the three-dimensional model data is updated by the virtual obstacle object.
A sweeping robot map data generating device, the device comprising:
the three-dimensional model data acquisition module is used for acquiring three-dimensional model data of the target object;
the component attribute information generation module is used for acquiring component parameters of each component in the corresponding three-dimensional model data and generating corresponding component attribute information based on the component parameters;
and the map data generation module is used for converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method of any of the above embodiments when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the above embodiments.
According to the map data generation method, device, computer equipment and medium for the sweeping robot, the three-dimensional model data of the target object is obtained, then the component parameters of all the components in the corresponding three-dimensional model data are obtained, the corresponding component attribute information is generated based on the component parameters, and the three-dimensional model data and the component attribute information of all the components in the three-dimensional model data are further converted into the two-dimensional plane data, so that the map data for the sweeping robot navigation is obtained. Therefore, map data for the sweeping robot can be generated according to the acquired three-dimensional model data of the target object and the attribute information of each component generated by the component parameters of each component in the three-dimensional model data, and compared with the map building through the data acquired by the sweeping robot in the sweeping process, the integrity and the accuracy of the built map data can be improved.
Drawings
Fig. 1 is an application scenario diagram of a map data generation method of a sweeping robot in an embodiment;
fig. 2 is a schematic flow chart of a map data generation method of the sweeping robot in one embodiment;
fig. 3 is a schematic flow chart of a map data generation method of the sweeping robot in another embodiment;
fig. 4 is a schematic flow chart of the map data generation step of the sweeping robot in one embodiment;
FIG. 5 is a diagram illustrating coordinate transformation in one embodiment;
fig. 6 is a schematic flow chart of a map data generation step of the sweeping robot in another embodiment;
fig. 7 is a schematic flow chart of a map data generation method of the sweeping robot in another embodiment;
fig. 8 is a block diagram of a map data generating apparatus of the sweeping robot in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The map data generation method for the sweeping robot can be applied to the application environment shown in fig. 1. The terminal 102 communicates with the server 104 via a network. Specifically, the server 104 may obtain three-dimensional model data of the target object, then obtain component parameters corresponding to each component in the three-dimensional model data, and generate corresponding component attribute information based on the component parameters; the server 104 then converts the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot. The server 104 may obtain map data corresponding to a configuration request from the terminal and transmit the map data to the terminal 102, so that the terminal 102 can perform a task based on the map data. The terminal 102 may be, but is not limited to, various intelligent sweeping robots with sweeping capability, and the server 104 may be implemented by an independent server or by a server cluster formed of a plurality of servers.
In an embodiment, as shown in fig. 2, a map data generating method for a sweeping robot is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step S202, three-dimensional model data of the target object is acquired.
The target object is an object for which map data needs to be constructed; it may be, for example, an entire building, a whole residential community, or the buildings included in an area larger than a single community.
The three-dimensional model data refers to model data, corresponding to a target object in physical space, that is constructed by various Building Information Modeling (BIM) technologies. The three-dimensional model data may include data corresponding to different components, such as wall data, floor data, and door and window data.
In this embodiment, the server may construct corresponding three-dimensional model data based on the two-dimensional drawing data of the target object, store the three-dimensional model data in the server database, acquire the three-dimensional model data from the database based on the job instruction, and perform subsequent processing.
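As an illustration only, the three-dimensional model data described above might be organized server-side along the following lines; this is a minimal sketch, and the class and field names (Component, ThreeDModelData, component_tag, space_number, and so on) are assumptions rather than structures defined in this application:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Component:
    """One building component (wall, floor, door, window, ...) in the model."""
    component_id: str                           # hypothetical identifier, e.g. "wall-001"
    component_tag: str                          # component type label, e.g. "wall" or "floor"
    vertices: List[Tuple[float, float, float]]  # 3D geometry of the component
    space_number: str = ""                      # three-dimensional space (room/suite) it belongs to

@dataclass
class ThreeDModelData:
    """Three-dimensional model data of a target object built from its 2D drawings."""
    target_object_id: str
    components: List[Component] = field(default_factory=list)
```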
And S204, acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters.
In this embodiment, the server may filter and obtain the component parameters of the corresponding component from the model database according to the component tag of each component in the three-dimensional model data, for example, for a wall, the server may obtain the component parameters of the corresponding wall from the model database.
In this embodiment, there may be multiple sets of component parameters for the same component, for example, for a wall, the component parameters may include white wall, gray wall, and green wall, and for a floor, the component parameters may include cement floor, tile floor, wood floor, or floor with carpet laid.
In this embodiment, the component parameters may further include attribute information such as a size and a friction coefficient of the component, for example, for a floor surface on which the carpet is laid, the component parameters may include a material of the carpet, a thickness of the carpet, a friction coefficient of the carpet, and the like.
Specifically, the server may determine component parameters corresponding to components in the three-dimensional model data based on a selection instruction of the user.
Further, after determining the component parameters of a component, the server may generate the component attribute information of that component based on information such as its size in the three-dimensional model data and the corresponding component parameters. For example, for a wall, the server may generate the component attribute information corresponding to the wall according to the length of the wall and the corresponding component parameters; for the floor, the component attribute information corresponding to the floor can be generated according to the area of the floor and the selected component information.
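Continuing the data-structure sketch above, the attribute-generation step could look roughly as follows; the bounding-box footprint used here as the "size", and the parameter-table lookup by component tag, are assumptions for illustration only:

```python
from typing import Dict

def generate_component_attribute_info(component: Component,
                                      parameter_table: Dict[str, dict]) -> dict:
    """Combine a component's geometry with its selected component parameters."""
    params = parameter_table.get(component.component_tag, {})
    xs = [v[0] for v in component.vertices]
    ys = [v[1] for v in component.vertices]
    # Bounding-box footprint as a stand-in for the real length/area computation.
    size = (max(xs) - min(xs)) * (max(ys) - min(ys)) if component.vertices else 0.0
    return {
        "component_id": component.component_id,
        "tag": component.component_tag,
        "size": size,
        **params,   # e.g. material, thickness, friction coefficient
    }
```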
And step S206, converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot.
In this embodiment, after obtaining the three-dimensional model data and the component attribute information of each component in the three-dimensional model data, the server may convert the three-dimensional model data and the corresponding component attribute information of each component into two-dimensional plane data, for example, map the three-dimensional data to a two-dimensional plane, so as to obtain map data for the navigation of the sweeping robot.
Optionally, after the server acquires the three-dimensional model data, the server may further select corresponding virtual furniture objects from a furniture database, such as tables and chairs, beds, cabinets, tea tables, sofas, and kettles, and place the selected virtual furniture objects at corresponding positions according to the placement requirements of the corresponding physical furniture objects in the target object.
Further, when the server generates map data for navigation of the sweeping robot, the server can map each virtual furniture object in the three-dimensional model data to a two-dimensional plane to generate corresponding map data.
In the map data generation method for the sweeping robot, the map data for the navigation of the sweeping robot is obtained by obtaining the three-dimensional model data of the target object, then obtaining the component parameters of each component in the corresponding three-dimensional model data, and generating the corresponding component attribute information based on the component parameters, and further converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data. Therefore, map data for the sweeping robot can be generated according to the acquired three-dimensional model data of the target object and the attribute information of each component generated by the component parameters of each component in the three-dimensional model data, and compared with the map building through the data acquired by the sweeping robot in the sweeping process, the integrity and the accuracy of the built map data can be improved.
In one embodiment, referring to fig. 3, the server may construct three-dimensional model data through each functional model in the building design software, for example, construct building data through a building design functional module, construct structural data through a structural design functional module, and design other model data of the target object through other professional design functional modules.
Further, after the server obtains the three-dimensional model data of the target object, the three-dimensional model data can be processed through functional modules in decoration design software: for example, the server divides the three-dimensional model data through a room design module to obtain division data for each room, obtains floor material and height data through a floor design module, sets the edge size data of each room through a skirting-line design module, and configures the ground data under each door through a door design module.
In this embodiment, the server may obtain the component parameters of each corresponding component according to each functional module and process the three-dimensional model data to obtain the three-dimensional model data including the component attribute information.
Optionally, the server may further process the three-dimensional model data according to attribute information of the articles provided in the cloud building article database, and perform layout on the virtual furniture objects installed in the three-dimensional model data through the furniture design module, so as to obtain corresponding indoor layout data.
In this embodiment, after obtaining the complete three-dimensional model data, the server may extract the data required by the map from the three-dimensional model data and perform merging processing to obtain the map data for the sweeping robot. For example, room division data, floor material and height data, indoor layout data, room edge size data, floor under door data, and the like are extracted from the complete three-dimensional model data to generate map data for the sweeping robot.
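A rough sketch of this extract-and-merge step is given below; the key names (room_division, floor_material, and so on) are illustrative assumptions, not fields defined by this application:

```python
def build_map_payload(full_model: dict) -> dict:
    """Pull only the pieces the sweeping-robot map needs out of the complete model data."""
    wanted_keys = [
        "room_division",      # division data of each room
        "floor_material",     # floor material and height data
        "indoor_layout",      # placed virtual furniture objects
        "room_edge_size",     # skirting-line / room edge size data
        "ground_under_door",  # ground data under each door
    ]
    return {key: full_model[key] for key in wanted_keys if key in full_model}
```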
In one embodiment, obtaining component parameters corresponding to components in the three-dimensional model data, and generating corresponding component attribute information based on the component parameters may include: acquiring component parameters of each corresponding component in the three-dimensional model data from a database based on the corresponding relationship between the component and the component parameters; and generating component attribute information of each corresponding component according to the component parameters and the corresponding components in the three-dimensional model.
The correspondence between the components and the component parameters may be preset; for example, the server presets different component parameters corresponding to different components and generates a data table to be stored in the database.
In this embodiment, after the server obtains the three-dimensional model data, the server may query the data table in the database according to each component in the three-dimensional model data, and obtain the component parameter corresponding to each component.
Further, the server generates corresponding component attribute information according to the size of each component in the three-dimensional model data and the component parameter of the corresponding component.
In one embodiment, referring to fig. 4, the step of converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot includes:
step S402, establishing the association relationship between each component in the three-dimensional model data and the component attribute information corresponding to each component.
In this embodiment, after obtaining the component attribute information of each component in the three-dimensional model data, the server may associate the component with the component attribute information to generate an association relationship between the component and the component attribute information. For example, for a bedroom floor component, the bedroom floor component may be associated with corresponding component attribute information.
Step S404, determining reference point coordinates in the three-dimensional model data, and converting each component in the three-dimensional model data into a corresponding plane component based on the reference point coordinates.
In this embodiment, before the server converts the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data, the reference point coordinates corresponding to the three-dimensional model data may be determined.
Further, through spatial coordinate conversion, the server may convert the coordinates of each component in the three-dimensional model data into spatial coordinates in which the observation point is the origin and the line through the original origin and the observation point is the Z axis. For example, referring to fig. 5, in the three-dimensional model data O is the origin and P is the observation point; after conversion, P is the origin and PO is the Z-axis.
Further, the server converts each component in the three-dimensional model data into a planar component as seen from the reference point coordinates, that is, converts the three-dimensional model data into two-dimensional planar data.
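One possible numerical formulation of this conversion is sketched below, assuming the observation point P is given as a coordinate in the original frame and is distinct from the original origin O; the choice of the in-plane axes is arbitrary and is an assumption of this sketch:

```python
import numpy as np

def to_plan_view(points: np.ndarray, viewpoint: np.ndarray) -> np.ndarray:
    """Re-express 3D points in a frame whose origin is the observation point P and whose
    Z-axis runs along PO (O being the original origin), then drop the Z component."""
    p = np.asarray(viewpoint, dtype=float)
    e_z = -p / np.linalg.norm(p)                  # unit vector from P towards O
    helper = np.array([1.0, 0.0, 0.0])            # any vector not parallel to e_z
    if abs(np.dot(helper, e_z)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e_x = np.cross(helper, e_z)
    e_x /= np.linalg.norm(e_x)
    e_y = np.cross(e_z, e_x)
    rotation = np.vstack([e_x, e_y, e_z])         # rows are the axes of the new frame
    transformed = (np.asarray(points, dtype=float) - p) @ rotation.T
    return transformed[:, :2]                     # keep only the planar coordinates
```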
And S406, acquiring the part attribute information corresponding to each part according to the association relation, and mapping the part attribute information to the corresponding plane part to obtain map data for the navigation of the sweeping robot.
Specifically, the server may obtain the component attribute information corresponding to each component according to the association relationship between the component and its component attribute information, and map the component attribute information onto the corresponding planar component; for example, for a bedroom floor, the component attribute information corresponding to the bedroom floor is mapped onto the planar component corresponding to the bedroom floor, and for a desk, the component attribute information of the desk is mapped onto the planar component corresponding to the desk.
In this embodiment, the server obtains plane data corresponding to the three-dimensional model data by traversing each component in the three-dimensional model data, that is, map data for the navigation of the sweeping robot.
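Putting the two steps together, and continuing the earlier sketches (Component, ThreeDModelData, and to_plan_view are the assumed structures introduced above), the traversal might look like this:

```python
from typing import Dict, List
import numpy as np

def build_map_data(model: ThreeDModelData,
                   attribute_by_component: Dict[str, dict],
                   viewpoint: np.ndarray) -> List[dict]:
    """Flatten every component and attach its component attribute information."""
    map_data = []
    for component in model.components:
        outline_2d = to_plan_view(np.array(component.vertices), viewpoint)
        map_data.append({
            "component_id": component.component_id,
            "outline": outline_2d.tolist(),     # the planar component
            "attributes": attribute_by_component.get(component.component_id, {}),
        })
    return map_data
```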
In one embodiment, referring to fig. 6, converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot, includes:
step S602, acquiring component data of each component in the same three-dimensional space and component attribute information of the corresponding component according to the three-dimensional space where each component in the three-dimensional model data is located.
Specifically, the server may take a room or a suite of rooms in the target object as a three-dimensional space, and determine each component in each three-dimensional space in the three-dimensional model data.
Further, the server may obtain component data of components in the same three-dimensional space and component attribute information of corresponding components from the three-dimensional model data, for example, for a wall, a ground and corresponding component attribute information in the same space.
In step S604, map data corresponding to the three-dimensional space is generated based on the component data of each component in the same three-dimensional space and the component attribute information of the corresponding component.
In this embodiment, the server generates map data corresponding to a three-dimensional space, for example, map data corresponding to one room or a suite of rooms, based on the obtained component data of each component in each three-dimensional space and corresponding component attribute information.
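Continuing the earlier sketches, grouping the components by their three-dimensional space and producing per-space map data could be sketched as follows (the grouping by the assumed space_number field is an illustration, not a structure defined here):

```python
from collections import defaultdict

def build_maps_by_space(model: ThreeDModelData,
                        attribute_by_component: dict,
                        viewpoint) -> dict:
    """Group components by space number and build map data for each three-dimensional space."""
    components_by_space = defaultdict(list)
    for component in model.components:
        components_by_space[component.space_number].append(component)
    maps_by_space = {}
    for space_number, components in components_by_space.items():
        sub_model = ThreeDModelData(model.target_object_id, components)
        maps_by_space[space_number] = build_map_data(sub_model, attribute_by_component, viewpoint)
    return maps_by_space
```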
Step S606, a space number of the three-dimensional space is obtained, and the space number and the corresponding map data are merged and stored in a database.
Specifically, the server may further obtain a space number corresponding to each three-dimensional space in the three-dimensional model data of the target object; for example, for each suite of rooms in a building, the server may obtain the space number of that suite and merge and store the map data corresponding to each space number into the database, such as the map data for room 1-301, the map data for room 5-611, and so on.
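As an illustration of the merge-and-store step, the sketch below uses SQLite purely as an assumed storage backend; the table name and schema are not specified by this application:

```python
import json
import sqlite3

def store_space_maps(db_path: str, maps_by_space: dict) -> None:
    """Persist map data keyed by space number, e.g. {"1-301": [...], "5-611": [...]}."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS space_maps ("
        "space_number TEXT PRIMARY KEY, map_data TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO space_maps (space_number, map_data) VALUES (?, ?)",
        [(number, json.dumps(map_data)) for number, map_data in maps_by_space.items()],
    )
    conn.commit()
    conn.close()
```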
In one embodiment, after merging and storing the spatial number and the corresponding map data into the database, the method may further include: receiving a configuration instruction of a configuration system to the sweeping robot, wherein the configuration instruction carries a target space number of a three-dimensional space to be configured of the sweeping robot; and inquiring and acquiring corresponding map data from the database according to the target space number, and sending the map data to the sweeping robot.
The configuration system is a system specially used for carrying out service configuration on the sweeping robot.
In this embodiment, the configuration system configures the sweeping robot through a configuration instruction, where the configuration instruction carries the target space number of the three-dimensional space to be configured for the sweeping robot, such as 5-611.
In this embodiment, the same sweeping robot may be configured with space numbers of a plurality of three-dimensional spaces, and set to execute corresponding tasks on the three-dimensional spaces in different time periods.
Further, after receiving the configuration instruction of the configuration system, the server may query the database, obtain corresponding map data, for example, map data corresponding to 5-611, and send the map data to the sweeping robot.
In this embodiment, after the sweeping robot acquires the corresponding map data, it can perform path planning using that map data and execute the corresponding sweeping task after the path planning is completed.
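Continuing the storage sketch above, handling a configuration instruction could look roughly like this; the instruction fields (target_space_number, robot_id) and the send_to_robot callback are assumptions used only for illustration:

```python
import json
import sqlite3

def handle_configuration_instruction(db_path: str, instruction: dict, send_to_robot) -> None:
    """Look up the map data for the target space number and push it to the sweeping robot."""
    target_space_number = instruction["target_space_number"]   # e.g. "5-611"
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT map_data FROM space_maps WHERE space_number = ?",
        (target_space_number,),
    ).fetchone()
    conn.close()
    if row is not None:
        send_to_robot(instruction["robot_id"], json.loads(row[0]))
```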
In the above embodiment, the map data corresponding to the three-dimensional space is generated by obtaining the component data and the component attribute information of each component in each three-dimensional space, and then the map data and the space number corresponding to the three-dimensional space are merged and stored, so that the map data corresponding to the three-dimensional space can be directly configured according to the space number when the sweeping robot is subsequently subjected to map configuration, and the efficiency of map data configuration can be improved.
In one embodiment, referring to fig. 7, the method may further include:
step S702, receiving a live-action image uploaded by the sweeping robot, wherein the live-action image comprises a target obstacle object.
The live-action image is an image of the environment where the sweeping robot is located. In this embodiment, an image collecting device installed at the top of the sweeping robot may collect live-action images of the environment where the sweeping robot is located, and the sweeping robot sends the live-action images to the server.
In this embodiment, the live-action image captured by the image capturing device may include a target obstacle object, for example, various objects such as a table, a chair, a kettle, a toy, and the like.
It will be understood by those skilled in the art that the target obstacle in the live-action image collected by the collecting device may be a single object or a plurality of objects, and the application is not limited thereto.
Step S704, determining whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data according to the target obstacle object.
In this embodiment, after acquiring the live-action image, the server may extract object information of the captured target obstacle object, such as point cloud data of the target obstacle object, which may include, but is not limited to, size information, position information, object tag, object name, and the like of the target obstacle object, from the live-action image.
Further, the server may query the three-dimensional model data according to the object information to determine whether a target virtual obstacle object corresponding to the target obstacle object exists in a plurality of virtual obstacle objects of the three-dimensional model data.
For example, the server may search for a virtual obstacle object corresponding to the physical obstacle object according to the object tag, and then determine whether the searched virtual obstacle object is consistent with the target obstacle object by comparing the size information, thereby determining whether the searched virtual obstacle object is a virtual obstacle object corresponding to the object information.
Further, the server can also obtain the distance information between the sweeping robot and the target obstacle object, and the relative position of the virtual sweeping robot and the virtual obstacle object corresponding to the object information in the three-dimensional model data.
Further, the server may compare whether the distance information coincides with the relative position to determine whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data. For example, when the distance information coincides with the relative position, the server may determine that a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data, and when the distance information does not coincide with the relative position, the server may determine that the target virtual obstacle object corresponding to the target obstacle object does not exist in the three-dimensional model data.
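A simplified sketch of this existence check is given below; the tolerances, field names, and the use of a single scalar "size" per object are assumptions made for illustration:

```python
import math

def target_virtual_obstacle_exists(object_info: dict,
                                   virtual_obstacles: list,
                                   robot_position: tuple,
                                   measured_distance: float,
                                   size_tolerance: float = 0.05,
                                   distance_tolerance: float = 0.2) -> bool:
    """Return True if some virtual obstacle object matches the detected target obstacle object."""
    for obstacle in virtual_obstacles:
        if obstacle["object_tag"] != object_info["object_tag"]:
            continue                                            # object tags must match first
        if abs(obstacle["size"] - object_info["size"]) > size_tolerance:
            continue                                            # then compare size information
        relative_position = math.dist(robot_position, obstacle["position"])
        if abs(relative_position - measured_distance) <= distance_tolerance:
            return True                                         # distance agrees with the model
    return False
```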
Step S706, when the target virtual obstacle object corresponding to the target obstacle object does not exist in the three-dimensional model data, updating the three-dimensional model data according to the live-action image.
Specifically, when the server determines that the target virtual obstacle object corresponding to the target obstacle object does not exist in the three-dimensional model data, the server may create the virtual obstacle object corresponding to the target obstacle object according to object information obtained from the live-action data, for example, create the virtual obstacle object corresponding to the target obstacle object according to the length and width dimensions, color information, material quality, and the like of the target object, and then update the three-dimensional model data according to the created virtual obstacle object.
In the above embodiment, the real-scene image uploaded by the sweeping robot is received, and then whether the target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data is judged according to the target obstacle object in the real-scene image, and when the target virtual obstacle object corresponding to the target obstacle object does not exist in the three-dimensional model data, the three-dimensional model data is updated according to the real-scene image, so that when the position of an object in the real-scene environment changes or an object is newly added, the three-dimensional model data can be updated in time, and the accuracy of the three-dimensional model data is improved.
In one embodiment, updating the three-dimensional model data according to the live-action image may include: performing feature extraction on the live-action image to obtain object information of a target obstacle object in the live-action image; constructing a virtual obstacle corresponding to the target obstacle according to the object information; the three-dimensional model data is updated by the virtual obstacle object.
In this embodiment, the server may obtain object information of the target obstacle object in the live-action image by performing feature extraction on the live-action image, for example, extracting size information, color information, texture attribute information, and the like of the target obstacle object in the live-action image.
In this embodiment, the feature extraction of the live-action image by the server may be performed by a neural network model, for example, a CenterNet network model.
Specifically, before the live-action image feature extraction, the server may pre-train and test the constructed initial neural network model through training set data.
In this embodiment, after the neural network model has been trained and tested, the server may input the live-action image into the neural network model and perform successive multi-scale feature extraction to obtain feature maps at a plurality of different scales.
Further, the server performs feature fusion on the feature maps of every two adjacent levels in sequence, from the higher-level layers to the lower-level layers, to obtain fused features corresponding to each scale.
Further, the server performs regression processing on the feature maps of the respective scales to obtain regression results for the fused features corresponding to each scale. The server may then perform post-processing using non-maximum suppression (NMS) to screen the multiple regression results, and obtain the object information of the target obstacle object in the live-action image based on the screened regression results.
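The screening step mentioned above relies on standard non-maximum suppression; a minimal NumPy sketch of that textbook procedure follows (it is not code from this application, and the IoU threshold is an assumed value):

```python
import numpy as np

def non_max_suppression(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> list:
    """Keep the highest-scoring boxes and drop boxes that overlap a kept box too much.
    boxes is an (N, 4) array of [x1, y1, x2, y2]; the indices of kept boxes are returned."""
    order = scores.argsort()[::-1]                 # candidates sorted by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        order = rest[iou <= iou_threshold]         # discard heavily overlapping candidates
    return keep
```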
In this embodiment, to improve the processing accuracy of the neural network model, the server may preprocess the live-action image before inputting it into the neural network model, for example, adjusting the size of the live-action image so that the adjusted image meets the input requirements of the neural network model.
Further, after the server acquires the object information, it may construct a corresponding virtual obstacle object according to the length, width, and height of the target obstacle object included in the object information, and then update the virtual obstacle object into the three-dimensional model data according to the corresponding position information.
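Continuing the earlier data-structure sketch, constructing and inserting the virtual obstacle object could look roughly like this; representing the obstacle as an axis-aligned box anchored at its position, and the object_info field names, are assumptions for illustration:

```python
def update_model_with_obstacle(model: ThreeDModelData, object_info: dict) -> None:
    """Add an axis-aligned box standing in for the detected target obstacle object."""
    x, y, z = object_info["position"]              # anchor corner of the obstacle
    length = object_info["length"]
    width = object_info["width"]
    height = object_info["height"]
    vertices = [
        (x + dx, y + dy, z + dz)
        for dx in (0.0, length)
        for dy in (0.0, width)
        for dz in (0.0, height)
    ]
    model.components.append(Component(
        component_id=object_info.get("object_name", "obstacle"),
        component_tag="obstacle",
        vertices=vertices,
    ))
```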
In the above embodiment, the object information of the target obstacle object in the live-action image is obtained by performing feature extraction on the live-action image, then the virtual obstacle object corresponding to the target obstacle object is constructed according to the object information, and the three-dimensional model data is updated through the virtual obstacle object, so that the three-dimensional model data can be updated more accurately, and the accuracy of the constructed three-dimensional model data is improved.
It should be understood that, although the steps in the flowcharts of fig. 2, 4, 6 and 7 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4, 6, and 7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a sweeping robot map data generating device, including: a three-dimensional model data acquisition module 100, a component attribute information generation module 200, and a map data generation module 300, wherein:
a three-dimensional model data obtaining module 100, configured to obtain three-dimensional model data of a target object.
And a component attribute information generating module 200, configured to obtain component parameters corresponding to each component in the three-dimensional model data, and generate corresponding component attribute information based on the component parameters.
The map data generation module 300 is configured to convert the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data, so as to obtain map data for the navigation of the sweeping robot.
In one embodiment, the component attribute information generation module 200 may include:
and the component parameter acquisition submodule is used for acquiring component parameters of each corresponding component in the three-dimensional model data from the database based on the corresponding relation between the components and the component parameters.
And the component attribute information generation submodule is used for generating component attribute information corresponding to each component according to each component parameter and each corresponding component in the three-dimensional model.
In one embodiment, the map data generation module 300 may include:
and the incidence relation establishing submodule is used for establishing incidence relations among all the parts in the three-dimensional model data and the part attribute information corresponding to all the parts.
And the plane part generation submodule is used for determining the reference point coordinates in the three-dimensional model data and converting each part in the three-dimensional model data into a corresponding plane part based on the reference point coordinates.
And the map data generation submodule is used for acquiring the part attribute information corresponding to each part according to the association relation, and mapping the part attribute information to the corresponding plane part to obtain the map data for the navigation of the sweeping robot.
In one embodiment, the map data generation module 300 may include:
and the obtaining submodule is used for obtaining component data of each component in the same three-dimensional space and component attribute information of the corresponding component according to the three-dimensional space where each component in the three-dimensional model data is located.
And the three-dimensional space map data generation submodule is used for generating map data corresponding to the three-dimensional space according to the component data of each component in the same three-dimensional space and the component attribute information of the corresponding component.
And the storage submodule is used for acquiring the space number of the three-dimensional space and merging and storing the space number and the corresponding map data into the database.
In one embodiment, the apparatus may further include:
and the configuration instruction receiving module is used for receiving a configuration instruction of the configuration system to the sweeping robot after the storage sub-module merges and stores the space number and the corresponding map data into the database, wherein the configuration instruction carries a target space number of a three-dimensional space to be configured of the sweeping robot.
And the sending module is used for inquiring and acquiring corresponding map data from the database according to the target space number and sending the map data to the sweeping robot.
In one embodiment, the apparatus may further include:
and the live-action image receiving module is used for receiving the live-action image uploaded by the sweeping robot, and the live-action image comprises the target obstacle object.
And the judging module is used for judging whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data or not according to the target obstacle object.
And the updating module is used for updating the three-dimensional model data according to the live-action image when the target virtual obstacle object corresponding to the target obstacle object does not exist in the three-dimensional model data.
In one embodiment, the update module may include:
and the characteristic extraction submodule is used for extracting the characteristics of the live-action image to obtain the object information of the target obstacle object in the live-action image.
And the virtual obstacle object submodule is used for constructing a virtual obstacle object corresponding to the target obstacle object according to the object information.
And the updating submodule is used for updating the three-dimensional model data through the virtual obstacle object.
For specific limitations of the map data generating device of the sweeping robot, reference may be made to the above limitations of the map data generating method of the sweeping robot, which are not described herein again. All or part of the modules in the map data generating device of the sweeping robot can be implemented in software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in a processor of the computer device or be independent of it, or can be stored, in software form, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data such as three-dimensional model data, component parameters, component attribute information, map data, and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a map data generation method for a sweeping robot.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: acquiring three-dimensional model data of a target object; acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters; and converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot.
In one embodiment, the obtaining component parameters of each component in the corresponding three-dimensional model data and generating corresponding component attribute information based on the component parameters when the processor executes the computer program may include: acquiring component parameters of each corresponding component in the three-dimensional model data from a database based on the corresponding relationship between the component and the component parameters; and generating component attribute information of each corresponding component according to the component parameters and the corresponding components in the three-dimensional model.
In one embodiment, when the processor executes the computer program, the method for converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot includes: establishing an incidence relation between each component in the three-dimensional model data and component attribute information corresponding to each component; determining reference point coordinates in the three-dimensional model data, and converting each part in the three-dimensional model data into a corresponding plane part based on the reference point coordinates; and acquiring the part attribute information corresponding to each part according to the association relation, and mapping the part attribute information to the corresponding plane part to obtain map data for the navigation of the sweeping robot.
In one embodiment, when the processor executes the computer program, the method for converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot includes: acquiring component data of each component in the same three-dimensional space and component attribute information of a corresponding component according to the three-dimensional space where each component in the three-dimensional model data is located; generating map data corresponding to the three-dimensional space according to the component data of each component in the same three-dimensional space and the component attribute information of the corresponding component; and acquiring a space number of the three-dimensional space, and merging and storing the space number and the corresponding map data into a database.
In one embodiment, after the processor executes the computer program to implement merging and storing the space number and the corresponding map data into the database, the following steps can be further implemented: receiving a configuration instruction of a configuration system to the sweeping robot, wherein the configuration instruction carries a target space number of a three-dimensional space to be configured of the sweeping robot; and inquiring and acquiring corresponding map data from the database according to the target space number, and sending the map data to the sweeping robot.
In one embodiment, the processor when executing the computer program can further implement the following steps: receiving a live-action image uploaded by the sweeping robot, wherein the live-action image comprises a target obstacle object; judging whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data or not according to the target obstacle object; and when the three-dimensional model data does not have the target virtual obstacle object corresponding to the target obstacle object, updating the three-dimensional model data according to the live-action image.
In one embodiment, the processor, when executing the computer program, implements updating the three-dimensional model data according to the live-action image, and may include: performing feature extraction on the live-action image to obtain object information of a target obstacle object in the live-action image; constructing a virtual obstacle object corresponding to the target obstacle object according to the object information; the three-dimensional model data is updated by the virtual obstacle object.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of: acquiring three-dimensional model data of a target object; acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters; and converting the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data to obtain map data for the navigation of the sweeping robot.
In one embodiment, the computer program, when executed by the processor, is configured to obtain component parameters corresponding to components in the three-dimensional model data, and generate corresponding component attribute information based on the component parameters, and may include: acquiring component parameters of each corresponding component in the three-dimensional model data from a database based on the corresponding relationship between the component and the component parameters; and generating component attribute information of each corresponding component according to the component parameters and the corresponding components in the three-dimensional model.
In one embodiment, when executed by the processor, the computer program may be configured to convert the three-dimensional model data and the part attribute information of each part in the three-dimensional model data into two-dimensional plane data, so as to obtain map data for navigation of the sweeping robot, where the map data may include: establishing an incidence relation between each component in the three-dimensional model data and component attribute information corresponding to each component; determining reference point coordinates in the three-dimensional model data, and converting each part in the three-dimensional model data into a corresponding plane part based on the reference point coordinates; and acquiring the part attribute information corresponding to each part according to the association relation, and mapping the part attribute information to the corresponding plane part to obtain map data for the navigation of the sweeping robot.
In one embodiment, when executed by the processor, the computer program implements converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data, and obtaining map data for navigation of the sweeping robot may include: acquiring component data of each component in the same three-dimensional space and component attribute information of a corresponding component according to the three-dimensional space where each component in the three-dimensional model data is located; generating map data corresponding to the three-dimensional space according to the component data of each component in the same three-dimensional space and the component attribute information of the corresponding component; and acquiring a space number of the three-dimensional space, and merging and storing the space number and the corresponding map data into a database.
In one embodiment, after the computer program is executed by the processor to implement merging and storing the space number and the corresponding map data into the database, the following steps can be further implemented: receiving a configuration instruction of a configuration system to the sweeping robot, wherein the configuration instruction carries a target space number of a three-dimensional space to be configured of the sweeping robot; and inquiring and acquiring corresponding map data from the database according to the target space number, and sending the map data to the sweeping robot.
In one embodiment, the computer program when executed by the processor further performs the steps of: receiving a live-action image uploaded by the sweeping robot, wherein the live-action image comprises a target obstacle object; judging whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data or not according to the target obstacle object; and when the three-dimensional model data does not have the target virtual obstacle object corresponding to the target obstacle object, updating the three-dimensional model data according to the live-action image.
In one embodiment, the computer program when executed by the processor to implement updating three-dimensional model data from live-action images may include: performing feature extraction on the live-action image to obtain object information of a target obstacle object in the live-action image; constructing a virtual obstacle corresponding to the target obstacle according to the object information; the three-dimensional model data is updated by the virtual obstacle object.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing related hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A map data generation method for a sweeping robot is characterized by comprising the following steps:
acquiring three-dimensional model data of a target object;
acquiring component parameters of each component in the corresponding three-dimensional model data, and generating corresponding component attribute information based on the component parameters;
and converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot.
2. The method of claim 1, wherein obtaining component parameters corresponding to components in the three-dimensional model data and generating corresponding component attribute information based on the component parameters comprises:
acquiring component parameters corresponding to each component in the three-dimensional model data from a database based on the corresponding relation between the components and the component parameters;
and generating the component attribute information of each corresponding component according to each component parameter and the corresponding component in the three-dimensional model data.
3. The method according to claim 1, wherein the converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot comprises:
establishing an association relationship between each component in the three-dimensional model data and the component attribute information corresponding to each component;
determining reference point coordinates in the three-dimensional model data, and converting each component in the three-dimensional model data into a corresponding planar component based on the reference point coordinates;
and acquiring the component attribute information corresponding to each component according to the association relationship, and mapping the component attribute information onto the corresponding planar component to obtain the map data for navigation of the sweeping robot.
4. The method according to claim 1, wherein the converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot comprises:
acquiring component data of each component in the same three-dimensional space and component attribute information corresponding to the component according to the three-dimensional space where each component in the three-dimensional model data is located;
generating map data corresponding to the three-dimensional space according to the component data of each component in the same three-dimensional space and the component attribute information corresponding to the component;
and acquiring a space number of the three-dimensional space, and storing the space number together with the corresponding map data in a database.
5. The method of claim 4, wherein after storing the space number together with the corresponding map data in the database, the method further comprises:
receiving a configuration instruction issued by a configuration system for the sweeping robot, wherein the configuration instruction carries a target space number of the three-dimensional space to be configured for the sweeping robot;
and querying the database for the corresponding map data according to the target space number, and sending the map data to the sweeping robot.
6. The method of claim 5, further comprising:
receiving a live-action image uploaded by the sweeping robot, wherein the live-action image comprises a target obstacle object;
determining, according to the target obstacle object, whether a target virtual obstacle object corresponding to the target obstacle object exists in the three-dimensional model data;
and when the three-dimensional model data does not have a target virtual obstacle object corresponding to the target obstacle object, updating the three-dimensional model data according to the live-action image.
7. The method of claim 6, wherein the updating the three-dimensional model data according to the live-action image comprises:
performing feature extraction on the live-action image to obtain object information of a target obstacle object in the live-action image;
constructing a virtual obstacle object corresponding to the target obstacle object according to the object information;
updating the three-dimensional model data with the virtual obstacle object.
8. A map data generation device for a sweeping robot, characterized in that the device comprises:
the three-dimensional model data acquisition module is used for acquiring three-dimensional model data of the target object;
the component attribute information generation module is used for acquiring component parameters of each component in the corresponding three-dimensional model data and generating corresponding component attribute information based on the component parameters;
and the map data generation module is used for converting the three-dimensional model data and the component attribute information of each component in the three-dimensional model data into two-dimensional plane data to obtain map data for navigation of the sweeping robot.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011181048.9A 2020-10-29 2020-10-29 Map data generation method and device for sweeping robot, computer equipment and medium Active CN112200907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011181048.9A CN112200907B (en) 2020-10-29 2020-10-29 Map data generation method and device for sweeping robot, computer equipment and medium

Publications (2)

Publication Number Publication Date
CN112200907A CN112200907A (en) 2021-01-08
CN112200907B true CN112200907B (en) 2022-05-27

Family

ID=74011945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011181048.9A Active CN112200907B (en) 2020-10-29 2020-10-29 Map data generation method and device for sweeping robot, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN112200907B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373046B (en) * 2021-12-27 2023-08-18 达闼机器人股份有限公司 Method, device and storage medium for assisting robot operation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111006676A (en) * 2019-11-14 2020-04-14 广东博智林机器人有限公司 Map construction method, device and system
CN111521184A (en) * 2020-04-13 2020-08-11 轻客小觅机器人科技(成都)有限公司 Map building method, device and system of sweeping robot
WO2020200282A1 (en) * 2019-04-02 2020-10-08 北京石头世纪科技股份有限公司 Robot working area map constructing method and apparatus, robot, and medium

Also Published As

Publication number Publication date
CN112200907A (en) 2021-01-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant