CN112348944B - Three-dimensional model data updating method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112348944B
CN112348944B (application CN202011184844.8A)
Authority
CN
China
Prior art keywords: three-dimensional model, model data, target object, information, initial
Prior art date
Legal status: Active (assumption, not a legal conclusion)
Application number
CN202011184844.8A
Other languages
Chinese (zh)
Other versions
CN112348944A (en)
Inventor
尤勇敏
Other inventors have requested that their names not be disclosed
Current Assignee
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Original Assignee
Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiuling Jiangsu Digital Intelligent Technology Co Ltd
Priority to CN202011184844.8A
Publication of CN112348944A
Application granted
Publication of CN112348944B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application relates to the technical field of smart homes, and in particular to a three-dimensional model data updating method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring initial three-dimensional model data of a physical space; acquiring a live-action image of the physical space, wherein the live-action image comprises a target object; determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device; judging, according to the object information and the distance information, whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data; and when the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data. By adopting the method, the accuracy of the three-dimensional model data can be improved.

Description

Three-dimensional model data updating method, device, computer equipment and storage medium
Technical Field
The application relates to the technical field of smart homes, and in particular to a three-dimensional model data updating method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of science and technology, self-moving devices such as sweeping robots are used ever more widely. A self-moving device can take three-dimensional model data created in advance as map data and perform a cleaning task accordingly.
However, as time passes, objects in the physical space may be added or moved to other positions, so that the objects in the three-dimensional model data no longer fully correspond to the objects in the physical space and the three-dimensional model data becomes inaccurate.
Disclosure of Invention
In view of the above, it is necessary to provide a three-dimensional model data updating method, apparatus, computer device and storage medium capable of improving the accuracy of three-dimensional model data, in order to solve the above technical problem.
A three-dimensional model data updating method, the method comprising:
acquiring initial three-dimensional model data of a physical space;
acquiring a live-action image of the physical space, wherein the live-action image comprises a target object;
determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device;
judging, according to the object information and the distance information, whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data; and
when the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data.
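The steps above can be sketched in a few lines of code. Everything here is an illustrative assumption rather than the patent's actual representation: virtual objects are held in a dict keyed by label, distances are scalar metres, and a 0.2 m tolerance stands in for the "consistent" test.

```python
# Hypothetical sketch of the claimed update flow; data shapes and the
# tolerance value are assumptions, not taken from the patent.

def has_matching_virtual_object(model, label, distance, tol=0.2):
    """True iff the model already holds a virtual object with this label
    at a position consistent (within tol metres) with the observed distance."""
    virtual = model["objects"].get(label)
    return virtual is not None and abs(virtual["distance"] - distance) <= tol

def update_model(model, label, obj_info, distance):
    """One pass of the claimed method for a single detected target object."""
    if not has_matching_virtual_object(model, label, distance):
        # No matching virtual target object: create (or re-place) one
        # from the observed object information and distance.
        model["objects"][label] = {"info": obj_info, "distance": distance}
    return model

model = {"objects": {"table": {"info": {}, "distance": 1.0}}}
# A chair appears that the initial model does not contain:
update_model(model, "chair", {"size": (0.5, 0.5)}, 2.3)
print(sorted(model["objects"]))
```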
In one embodiment, the live-action image includes a color image and depth data;
determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device comprises:
performing feature extraction on the color image in the live-action image to obtain the object information of the target object in the live-action image; and
calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
In one embodiment, determining whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information includes:
inquiring whether a virtual object corresponding to the object information exists in the initial three-dimensional model data or not according to the object information;
when a virtual object corresponding to the object information exists in the initial three-dimensional model data, acquiring a relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and judging whether the relative position is consistent with the distance information; and
when the relative position is inconsistent with the distance information, determining that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, after querying whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, the method further includes:
and when the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, determining that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information includes:
sending the object information and the distance information to a server, wherein the object information and the distance information are used for instructing the server to create a virtual object corresponding to the target object according to the object information and to update the initial three-dimensional model data according to the created virtual object and the distance information; and
receiving the updated three-dimensional model data fed back by the server.
In one embodiment, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data, includes:
creating a virtual object corresponding to the target object based on the object information, and acquiring a target category label of the virtual object corresponding to the target object according to the corresponding relationship between the object information and the category label;
and updating the initial three-dimensional model data according to the created virtual object, the target class label and the distance information to obtain updated three-dimensional model data.
In one embodiment, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data, includes:
creating a virtual object corresponding to the target object according to the object information, wherein the created virtual object comprises an object orientation;
acquiring orientation information of the self-moving device at the time the live-action image was acquired; and
determining the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and updating the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the method further comprises the following steps:
acquiring the acquisition time of a live-action image;
and associating the acquisition time with the created virtual object corresponding to the target object.
A three-dimensional model data updating apparatus, the apparatus comprising:
the initial three-dimensional model data acquisition module is used for acquiring initial three-dimensional model data of a physical space;
the live-action image acquisition module is used for acquiring a live-action image of the physical space, wherein the live-action image comprises a target object;
the distance information determining module is used for determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device;
the judging module is used for judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data or not according to the object information and the distance information;
and the updating module is used for creating a virtual object corresponding to the target object according to the object information when the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data.
A computer device comprising a memory storing a computer program and a processor that implements the steps of the method of any of the embodiments described above when executing the computer program.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the method of any of the embodiments described above.
The three-dimensional model data updating method, apparatus, computer device and storage medium acquire initial three-dimensional model data of a physical space and a live-action image of the physical space containing a target object; determine, according to the live-action image, object information of the target object and distance information between the target object in the physical space and the self-moving device; judge, according to the object information and the distance information, whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data; and, when it does not, create a virtual object corresponding to the target object according to the object information and update the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data. The initial three-dimensional model data can thus be checked against the object information and distance information obtained from the live-action image, and updated whenever the virtual target object corresponding to the target object is missing, thereby improving the accuracy of the three-dimensional model data.
Drawings
FIG. 1 is a diagram illustrating an exemplary application of a method for updating three-dimensional model data;
FIG. 2 is a schematic flow chart of a method for updating three-dimensional model data according to an embodiment;
FIG. 3 is a diagram illustrating the relationship between the digital twin model space and the physical space in one embodiment;
FIG. 4 is a schematic flow chart of a three-dimensional model data updating method according to an embodiment;
FIG. 5 is a flowchart illustrating a method for updating three-dimensional model data according to another embodiment;
FIG. 6 is a block diagram showing the structure of a three-dimensional model data updating apparatus according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The three-dimensional model data updating method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The server 104 may construct initial three-dimensional model data of the physical space and transmit it to the terminal 102. The terminal 102 may acquire a live-action image of the physical space, and the live-action image may include a target object. Further, the terminal 102 may determine, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device, and judge, according to the object information and the distance information, whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data. Further, when the terminal 102 determines that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, it creates a virtual object corresponding to the target object according to the object information and updates the initial three-dimensional model data according to the created virtual object and the distance information, so as to obtain updated three-dimensional model data. The terminal 102 may be a self-moving device, specifically a cleaning device such as a sweeping robot, and the server 104 may be implemented by an independent server or by a server cluster formed of a plurality of servers.
In one embodiment, as shown in fig. 2, a three-dimensional model data updating method is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
Step S202, acquiring initial three-dimensional model data of the physical space.
The physical space refers to a real space, for example, a space environment including various household devices.
The three-dimensional model data refers to model data created by various Building Information Modeling (BIM) technologies. Referring to fig. 3, the three-dimensional model data forms a digital twin model space corresponding exactly to the physical space, and may include virtual objects corresponding to the respective physical objects in the actual area to be cleaned, that is, virtual furniture corresponding to the furniture in the physical space. The three-dimensional model data may further include data such as the name, material, position information and relevant size parameters of each virtual object, for example walls and various furniture and appliances, and may further include a device model corresponding to the device used for cleaning the room, that is, the self-moving device, such as a sweeping robot.
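As an illustration only, one way the per-object records just described (name, material, position, size parameters) could be held in code is a small record type; the field names and units below are assumptions, not taken from the patent.

```python
# Illustrative record type for a virtual object in the digital twin;
# field names and units are assumptions, not the patent's schema.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str        # e.g. "sofa"
    material: str    # e.g. "fabric"
    position: tuple  # (x, y, z) in model coordinates, metres
    size: tuple      # (length, width, height), metres

sofa = VirtualObject("sofa", "fabric", (1.2, 0.0, 3.4), (2.0, 0.9, 0.8))
print(sofa.name, sofa.position)
```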
The initial three-dimensional model data refers to model data of a real world space generated in a past time period and corresponding to the past time period.
In this embodiment, the server may construct the initial three-dimensional model data in advance, and store the initial three-dimensional model data in the server database.
Further, the server inquires and acquires the initial three-dimensional model data from the database based on the acquisition instruction of the terminal, and sends the initial three-dimensional model data to the terminal so as to perform subsequent processing through the terminal.
Step S204, acquiring a live-action image of the physical space, wherein the live-action image comprises a target object.
The live-action image refers to an image of the physical space acquired by an acquisition device. The live-action image may include a target object.
In this embodiment, an image capturing device may be installed on the top of the self-moving device, so that the image capturing device captures live-action images of the physical space as the self-moving device moves.
In this embodiment, the live-action image may contain one target object or a plurality of target objects, which is not limited in this application.
Step S206, determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device.
The object information may refer to point cloud data of a target object, and may include, but is not limited to, size information, position information, an object tag, an object name, and the like of the target object.
The distance information refers to position information between the target object in the physical space and the self-moving device, and may be a relative position between the target object and the self-moving device.
In this embodiment, after the terminal acquires the live-action image, it may extract object information of the captured physical object from the live-action image, for example through a neural network model or other image recognition techniques.
Further, the terminal may determine the distance information between the target object in the physical space and the self-moving device in various ways; for example, the terminal may combine information obtained from the camera and an infrared sensor with a monocular image distance measurement algorithm to calculate the distance information between the target object and the self-moving device.
Step S208, judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information.
As described above, the initial three-dimensional model data may include a wall and virtual objects corresponding to various home appliances.
In this embodiment, the terminal may determine whether a virtual target object corresponding to the target object exists in existing virtual objects in the initial three-dimensional model data according to the acquired object information and distance information of the target object.
Step S210, when there is no virtual target object corresponding to the target object in the initial three-dimensional model data, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information, to obtain updated three-dimensional model data.
In this embodiment, when the terminal determines that the virtual target object corresponding to the target object does not exist in the three-dimensional model data, the terminal may create the virtual object corresponding to the target object according to the object information, for example, create the virtual object corresponding to the target object according to the length, width, color information, material, and the like of the target object.
Further, the terminal may determine, according to the distance information corresponding to the target object, the position information of the virtual object corresponding to the target object in the initial three-dimensional model data, and update the three-dimensional model data with that virtual object, thereby obtaining updated three-dimensional model data.
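A minimal sketch of this placement step: since the text later notes that the virtual self-moving device mirrors the real one, the virtual object's model position can be taken as the virtual device's position plus the measured displacement. The 2-D vector form is an illustrative simplification, not the patent's formulation.

```python
# Sketch: place a newly created virtual object in model coordinates by
# offsetting from the virtual self-moving device's position. The 2-D
# (x, y) representation is an assumption for illustration.

def place_virtual_object(robot_pos, offset):
    """robot_pos, offset: (x, y) tuples in model coordinates, metres."""
    return (robot_pos[0] + offset[0], robot_pos[1] + offset[1])

print(place_virtual_object((1.0, 2.0), (0.5, -0.5)))  # (1.5, 1.5)
```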
In the three-dimensional model data updating method, initial three-dimensional model data of a physical space is obtained and a live-action image of the physical space, containing a target object, is collected. Object information of the target object and distance information between the target object in the physical space and the self-moving device are then determined according to the live-action image, and whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data is judged according to the object information and the distance information. When no such virtual target object exists, a virtual object corresponding to the target object is created according to the object information, and the initial three-dimensional model data is updated according to the created virtual object and the distance information to obtain updated three-dimensional model data. The initial three-dimensional model data can thus be checked against the object information and distance information obtained from the live-action image and updated whenever the corresponding virtual target object is missing, thereby improving the accuracy of the three-dimensional model data.
In one embodiment, the live-action image may include a color image and depth data.
In this embodiment, the capturing device for capturing the real-scene image may be a depth camera, and the real-scene image captured by the depth camera may include a color image and depth data, that is, an RGB image and depth information.
In this embodiment, determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object in the physical space and the self-moving device may include: performing feature extraction on the color image in the live-action image to obtain the object information of the target object in the live-action image; and calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
In this embodiment, the terminal may perform feature extraction on the acquired color image, for example, extract object information of the target object from the color image through a pre-trained neural network model and the like.
Specifically, the terminal may perform multi-scale feature extraction on the live-action image to obtain image features at multiple scales, and then fuse the image features of adjacent scales layer by layer to obtain fused features at multiple scales. Further, the terminal may perform regression processing on the fused features at each scale to obtain corresponding regression results, screen the multiple regression results, and obtain the object information of the target object in the live-action image based on the screened regression results.
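The layer-by-layer fusion of adjacent scales described above can be illustrated in the spirit of a feature-pyramid network: upsample the coarser map and add it to the next finer one. The real system would use a trained network (e.g. CenterNet); this NumPy toy sketches the fusion step only, and its map sizes are arbitrary.

```python
# Toy sketch of layer-by-layer multi-scale feature fusion (FPN-style).
# Real feature maps come from a trained CNN; these are placeholders.
import numpy as np

def fuse_pyramid(features):
    """features: list of 2-D maps, finest first, each half the size of
    the previous. Returns fused maps at every scale, finest first."""
    fused = [features[-1]]                        # start from the coarsest
    for finer in reversed(features[:-1]):
        up = np.kron(fused[-1], np.ones((2, 2)))  # nearest-neighbour 2x upsample
        fused.append(finer + up)                  # fuse with the finer scale
    return list(reversed(fused))

maps = [np.ones((4, 4)), np.ones((2, 2)), np.ones((1, 1))]
out = fuse_pyramid(maps)
print([m.shape for m in out])  # [(4, 4), (2, 2), (1, 1)]
```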
In this embodiment, the pre-trained neural network model may be a CenterNet network model, a ResNet network model, or the like, which is not limited in this application.
Further, the terminal may determine the distance information between the target object and the self-moving device according to the depth data in combination with the camera parameters of the depth camera.
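A common way to turn a depth reading plus camera intrinsics into an object-to-device distance is pinhole back-projection: recover the 3-D point behind a pixel, then take its norm. The intrinsic values below are made-up placeholders, not parameters from the patent.

```python
# Back-project a depth-camera pixel to a 3-D point with pinhole
# intrinsics, then take its norm as the object-to-camera distance.
import math

def pixel_to_distance(u, v, depth, fx, fy, cx, cy):
    """Distance (metres) from camera to the 3-D point behind pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return math.sqrt(x * x + y * y + z * z)

# At the principal point the distance equals the raw depth reading.
d = pixel_to_distance(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(round(d, 3))  # 2.0
```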
In the above embodiment, the object information of the target object is obtained by performing feature extraction on the color image in the live-action image, and the distance information between the target object and the self-moving device is calculated according to the depth data in the live-action image. The object information and distance information can therefore be determined from the live-action image alone; the data processing is simple, requires no manual involvement, and improves the degree of automation of the processing.
In one embodiment, the distance information between the target object and the self-moving device may also be determined by installing a signal transceiver on top of the self-moving device and comparing a transmitted detection signal with the signal reflected by the target object.
In one embodiment, determining, according to the object information and the distance information, whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data may include: querying, according to the object information, whether a virtual object corresponding to the object information exists in the initial three-dimensional model data; when such a virtual object exists, acquiring a relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and judging whether the relative position is consistent with the distance information; and when the relative position is inconsistent with the distance information, determining that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
Specifically, the terminal may query the initial three-dimensional model data according to the object information, and determine whether a virtual object corresponding to the object information exists in the initial three-dimensional model data. For example, according to an object tag such as a table, a chair, a toy, etc., it is queried whether a corresponding virtual object exists in the initial three-dimensional model data.
In this embodiment, when the terminal queries the initial three-dimensional model data and finds a virtual object corresponding to the object information, it may determine that a virtual object corresponding to the object information exists in the initial three-dimensional model data. In this case, however, the terminal cannot yet conclude that this virtual object is the virtual target object corresponding to the target object: for example, when the target object has moved within the physical space, the corresponding virtual object still exists in the initial three-dimensional model data, but its position no longer corresponds to the position of the target object.
Therefore, the terminal can acquire the relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data based on the initial three-dimensional model data and judge whether the relative position is consistent with the distance information.
In the present embodiment, the virtual self-moving device in the initial three-dimensional model data is set to move along with the self-moving device in the physical space; that is, the position of the virtual self-moving device in the initial three-dimensional model data and the position of the self-moving device in the physical space always remain consistent.
In this embodiment, after acquiring the relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, the terminal may compare the relative position with the distance information to determine whether they are consistent.
In this embodiment, when the terminal determines that the relative position is inconsistent with the distance information, it may determine that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, and the terminal may then update the initial three-dimensional model data according to the object information.
In one embodiment, after querying whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, the method may further include: and when the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, determining that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In this embodiment, when the terminal does not inquire the corresponding virtual object from the initial three-dimensional model data according to the object information, it may determine that the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, and then the terminal may determine that the target object is a newly added object, and the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
Further, a corresponding virtual object may then be created according to the object information of the target object, and the model updated accordingly.
In the above embodiment, by determining whether a virtual object corresponding to the object information exists in the initial three-dimensional model data, and when a virtual object corresponding to the object information exists in the initial three-dimensional model data, obtaining a relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and determining whether the relative position is consistent with the distance information, it can be accurately determined whether a target virtual object corresponding to the target object exists in the three-dimensional model data, and the accuracy of determining the target virtual object can be improved, so as to improve the accuracy of updating the three-dimensional model data.
In one embodiment, referring to fig. 4, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information may include:
Step S402: sending the object information and the distance information to a server, where the object information and the distance information are used to instruct the server to create a virtual object corresponding to the target object according to the object information and to update the initial three-dimensional model data according to the created virtual object and the distance information.
The server may be a cloud server. In this embodiment, when the terminal determines that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, the terminal may send the obtained object information and distance information of the target object to the cloud server, so as to update the three-dimensional model data through the cloud server.
Specifically, the cloud server may create a virtual object corresponding to the target object according to the object information, for example, create a corresponding virtual object according to the length, width, color, material, attribute, and the like of the target object, and update the initial three-dimensional model data according to the created virtual object and the distance information.
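Creating a virtual object from recognized attributes might be sketched as below. The record fields (length, width, color, material) follow the examples in the text, but the class and key names are illustrative assumptions, not the patent's data schema.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    """Illustrative record for a virtual object built from the
    attributes of a recognised target object."""
    length: float
    width: float
    color: str
    material: str

def create_virtual_object(object_info: dict) -> VirtualObject:
    # Map the recognised object information onto the model record.
    return VirtualObject(
        length=object_info["length"],
        width=object_info["width"],
        color=object_info["color"],
        material=object_info["material"],
    )
```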
Step S404: receiving the updated three-dimensional model data fed back by the server.
In this embodiment, after the cloud server updates the initial three-dimensional model data, the cloud server may generate corresponding update data and feed the update data back to the terminal, so that the terminal may update the initial three-dimensional model data.
In this embodiment, the update data may refer only to data related to the virtual object created for the target object, or it may be the entire three-dimensional model data.
In this embodiment, when the update data refers only to data related to the virtual object created for the target object, the terminal may update the initial three-dimensional model data located at the terminal according to the position information and the obtained update data; that is, the terminal performs only an incremental update. When the update data is the entire three-dimensional model data, the terminal may directly perform a full update of the initial three-dimensional model data at the terminal according to the update data.
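The incremental-versus-full distinction can be sketched as follows, modeling the three-dimensional model data as a mapping from object identifiers to object records. The dictionary representation and the `full` flag are illustrative assumptions.

```python
def apply_update(initial_model: dict, update_data: dict, full: bool) -> dict:
    """Apply server feedback to the terminal's local model copy.

    full=True  -> the feedback is the entire model: replace wholesale.
    full=False -> the feedback covers only changed objects: merge in.
    """
    if full:
        return dict(update_data)   # full update: take the server's model
    merged = dict(initial_model)
    merged.update(update_data)     # incremental update: merge by object id
    return merged
```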
In this embodiment, the object information and the distance information are sent to the server to instruct the server to create the virtual object corresponding to the target object according to the object information and to update the initial three-dimensional model data according to the created virtual object and the distance information; the terminal then receives the updated three-dimensional model data fed back by the server. In this way, the data processing load on the terminal can be reduced, and the data processing efficiency of the terminal can be improved.
In one embodiment, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data may include: creating a virtual object corresponding to the target object based on the object information, and acquiring a target category label of the virtual object corresponding to the target object according to the corresponding relationship between the object information and the category label; and updating the initial three-dimensional model data according to the created virtual object, the target class label and the distance information to obtain updated three-dimensional model data.
The category label is a label for indicating a category of an object, and may include, but is not limited to, a person, a pet, or furniture.
In this embodiment, the terminal may acquire in advance the correspondence between object information and the category label of each object, and then determine the target category label of the corresponding virtual object based on the object information. For example, a round object may be either a kettle or a bucket: when the object further includes a handle, it may be determined to be a kettle, and when its diameter is greater than 30 cm, it may be determined to be a bucket. In this way, the terminal may obtain the target category label of the virtual object corresponding to the target object according to the object information and the correspondence.
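The kettle/bucket example above suggests a rule-based lookup from object information to a category label, which might look like the following toy sketch. The dictionary keys, the 30 cm threshold, and the fallback label are illustrative assumptions taken from or added to the example, not the patent's actual correspondence table.

```python
def category_label(object_info: dict) -> str:
    """Toy object-info -> category-label lookup mirroring the text:
    a round object with a handle is a kettle; a round object wider
    than 30 cm is a bucket; anything else is left unlabelled."""
    if object_info.get("shape") == "round":
        if object_info.get("has_handle"):
            return "kettle"
        if object_info.get("diameter_cm", 0) > 30:
            return "bucket"
    return "unknown"
```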
In this embodiment, the correspondence between object information and category labels may also be created and generated by the cloud server and then sent to the terminal in response to an acquisition request from the terminal.
Further, the terminal may update the initial three-dimensional model data according to the created virtual object, the target category label, and the distance information to obtain updated three-dimensional model data, so that the updated three-dimensional model data includes the target category label of the target object.
In this embodiment, when the corresponding virtual object is created by the cloud server and the initial three-dimensional model data is updated, the cloud server may obtain the target category tag of the virtual object corresponding to the target object according to the correspondence between the object information and the category tag, and update the initial three-dimensional model data according to the created virtual object, the target category tag, and the distance information.
Further, after the terminal obtains the updated three-dimensional model data, a cleaning route can be planned according to the updated three-dimensional model data, so that whether a given object is a person, a pet, or furniture can be identified according to the target category labels in the three-dimensional model data, and whether the object may be collided with can be determined, thereby enabling the cleaning route to be planned.
In this embodiment, when the terminal plans the cleaning route according to the updated three-dimensional model data, it may perform the route planning through a simulation program in the digital twin model space in fig. 3, and control the self-moving device to execute the cleaning task after the path planning is completed.
In the above embodiment, the initial three-dimensional model data is updated according to the created virtual object, the target category label and the distance information to obtain the updated three-dimensional model data, so that the updated three-dimensional model data can include the category label, which is convenient for the self-moving device to perform subsequent path planning or automatically perform obstacle avoidance in the process of executing a cleaning task, and improves the safety.
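A label-driven obstacle-avoidance decision of the kind described above might be sketched as follows. The particular label set (avoiding people and pets while allowing close approach to furniture) is an illustrative assumption about one reasonable policy, not a rule stated in the disclosure.

```python
# Labels the planner must always route around with a safety margin
# (illustrative policy; the actual label set is application-defined).
AVOID_LABELS = {"person", "pet"}

def must_avoid(category_label: str) -> bool:
    """Decide, from an object's category label in the updated model,
    whether the cleaning-route planner must avoid the object."""
    return category_label in AVOID_LABELS
```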
In one embodiment, the method may further include: acquiring the acquisition time of a live-action image; and associating the acquisition time with the created virtual object corresponding to the target object.
Specifically, when acquiring the live-action image, the terminal may acquire the acquisition time of the live-action image correspondingly, and associate the acquired acquisition time with the created virtual object corresponding to the target object.
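Associating the acquisition time with the created virtual object might be sketched as below, representing the virtual object as a dictionary; the field name `captured_at` and the fallback to the current time are illustrative assumptions.

```python
import time

def attach_capture_time(virtual_object: dict, capture_time=None) -> dict:
    """Associate the live-action image's acquisition time with the
    created virtual object (field name is an assumption)."""
    if capture_time is None:
        capture_time = time.time()  # fall back to "now" if not supplied
    virtual_object["captured_at"] = capture_time
    return virtual_object
```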
In one embodiment, referring to fig. 5, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data may include:
Step S502: creating a virtual object corresponding to the target object according to the object information, where the created virtual object includes an object orientation.
In this embodiment, the virtual object created by the terminal according to the object information may include an object orientation, for example, for a chair, the orientation of the chair is determined to be face-to-face with the self-moving device according to the object information, or for a kettle, the handle of the kettle is located on the left side in the view of the self-moving device, and so on.
Step S504: acquiring the orientation information of the self-moving device when the live-action image is collected.
In this embodiment, the terminal may acquire the orientation information of the self-moving device at the time the acquisition device captures the live-action image, for example, facing south, north, east, or west.
Step S506: determining the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and updating the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
Specifically, the terminal may determine the orientation of the virtual object in the three-dimensional model data, for example, if the orientation information of the self-moving device is facing a wall surface and the target object is facing the self-moving device, it may be determined that the target object is placed opposite to the wall surface.
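Composing the device's orientation with the object's orientation relative to the device can be sketched for the planar case as follows. Representing both orientations as compass headings in degrees is an illustrative assumption; the patent does not specify an angle convention.

```python
def absolute_orientation(device_heading_deg, relative_bearing_deg):
    """Combine the self-moving device's heading in the world frame
    (e.g. 0 = north, 90 = east) with the object's orientation relative
    to the device, yielding the object's orientation in the model's
    world frame (illustrative planar composition)."""
    return (device_heading_deg + relative_bearing_deg) % 360
```

For example, if the device faces east (90) and the object faces directly back at the device (180 relative), the object faces west (270) in the model.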
Further, the terminal can update the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
In this embodiment, by acquiring the orientation information of the self-moving device when the live-action image is collected and determining the orientation of the virtual object in the three-dimensional model data, consistency between the three-dimensional model data and the physical space can be ensured, further improving the accuracy of the three-dimensional model data.
It should be understood that although the steps in the flowcharts of fig. 2, 4, and 5 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order illustrated and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4, and 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a three-dimensional model data updating apparatus including: the system comprises an initial three-dimensional model data acquisition module 100, a live-action image acquisition module 200, a distance information determination module 300, a judgment module 400 and an update module 500, wherein:
an initial three-dimensional model data obtaining module 100, configured to obtain initial three-dimensional model data of a physical space.
The live-action image acquisition module 200 is configured to acquire a live-action image of an entity space, where the live-action image includes a target object.
The distance information determining module 300 is configured to determine, according to the live-action image, object information of a target object in the live-action image and distance information between the target object and the self-moving device in the physical space.
The determining module 400 is configured to determine whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information.
An updating module 500, configured to, when a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, create a virtual object corresponding to the target object according to the object information, and update the initial three-dimensional model data according to the created virtual object and the distance information, to obtain updated three-dimensional model data.
In one embodiment, the live-action image may include a color image and depth data.
In this embodiment, the distance information determining module 300 may include:
The object information generation submodule is configured to perform feature extraction on the color image in the live-action image to obtain object information of the target object in the live-action image.
The distance information determining submodule is configured to calculate distance information between the target object and the self-moving device according to the depth data in the live-action image.
In one embodiment, the determining module 400 may include:
The query submodule is configured to query, according to the object information, whether a virtual object corresponding to the object information exists in the initial three-dimensional model data.
The judging submodule is configured to, when a virtual object corresponding to the object information exists in the initial three-dimensional model data, acquire the relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and judge whether the relative position is consistent with the distance information.
The determining submodule is configured to determine that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data when the relative position is inconsistent with the distance information.
In one embodiment, the apparatus may further include:
The virtual target object determining module is configured to, after the query submodule queries whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, determine that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data when no virtual object corresponding to the object information exists therein.
In one embodiment, the update module 500 may include:
The sending submodule is configured to send the object information and the distance information to the server, where the object information and the distance information are used to instruct the server to create a virtual object corresponding to the target object according to the object information and to update the initial three-dimensional model data according to the created virtual object and the distance information.
The receiving submodule is configured to receive the updated three-dimensional model data fed back by the server.
In one embodiment, the update module 500 may include:
The target category label obtaining submodule is configured to create a virtual object corresponding to the target object based on the object information, and to obtain a target category label of the virtual object corresponding to the target object according to the correspondence between the object information and the category labels.
The first updating submodule is configured to update the initial three-dimensional model data according to the created virtual object, the target category label, and the distance information to obtain updated three-dimensional model data.
In one embodiment, the update module 500 may include:
The virtual object creating submodule is configured to create a virtual object corresponding to the target object according to the object information, where the created virtual object includes an object orientation.
The orientation information acquisition submodule is configured to acquire the orientation information of the self-moving device when the live-action image is collected.
The second updating submodule is configured to determine the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and to update the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the apparatus may further include:
The acquisition time acquisition module is configured to acquire the acquisition time of the live-action image.
The association module is configured to associate the acquisition time with the created virtual object corresponding to the target object.
For specific definition of the three-dimensional model data updating device, reference may be made to the above definition of the three-dimensional model data updating method, which is not described herein again. The respective modules in the above-described three-dimensional model data updating apparatus may be realized in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used for storing data such as three-dimensional model data, live-action images and distance information. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a three-dimensional model data updating method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor that implements the following steps when executing the computer program: acquiring initial three-dimensional model data of a physical space; acquiring a live-action image of the physical space, where the live-action image includes a target object; determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object and the self-moving device in the physical space; judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information; and when a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the live-action image may include a color image and depth data.
In this embodiment, the determining, by the processor, of the object information of the target object in the live-action image and the distance information between the target object and the self-moving device in the physical space according to the live-action image may include: performing feature extraction on the color image in the live-action image to obtain the object information of the target object in the live-action image; and calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
In one embodiment, the determining whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information when the processor executes the computer program may include: querying, according to the object information, whether a virtual object corresponding to the object information exists in the initial three-dimensional model data; when a virtual object corresponding to the object information exists in the initial three-dimensional model data, acquiring the relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and judging whether the relative position is consistent with the distance information; and when the relative position is inconsistent with the distance information, determining that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, after the processor executes the computer program to query whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, the following steps may be further implemented: and when the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, determining that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, the processor, when executing the computer program, is configured to create a virtual object corresponding to the target object according to the object information, and update the initial three-dimensional model data according to the created virtual object and the distance information, and may include: sending the object information and the distance information to a server, wherein the object information and the distance information are used for indicating the server to create a virtual object corresponding to the target object according to the object information and updating the initial three-dimensional model data according to the created virtual object and the distance information; and receiving the updated three-dimensional model data fed back by the server.
In one embodiment, when the processor executes the computer program, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data may include: creating a virtual object corresponding to the target object based on the object information, and acquiring a target category label of the virtual object corresponding to the target object according to the corresponding relationship between the object information and the category label; and updating the initial three-dimensional model data according to the created virtual object, the target class label and the distance information to obtain updated three-dimensional model data.
In one embodiment, when the processor executes the computer program, creating a virtual object corresponding to the target object according to the object information and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data may include: creating a virtual object corresponding to the target object according to the object information, wherein the created virtual object comprises an object orientation; acquiring the orientation information of the self-moving device when the live-action image is acquired; and determining the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and updating the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the acquisition time of a live-action image; and associating the acquisition time with the created virtual object corresponding to the target object.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor performs the steps of: acquiring initial three-dimensional model data of a physical space; acquiring a live-action image of the physical space, where the live-action image includes a target object; determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object and the self-moving device in the physical space; judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information; and when a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the live-action image may include a color image and depth data.
In this embodiment, the determining, by the processor, of the object information of the target object in the live-action image and the distance information between the target object and the self-moving device in the physical space according to the live-action image may include: performing feature extraction on the color image in the live-action image to obtain the object information of the target object in the live-action image; and calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
In one embodiment, the determining whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information may include: querying, according to the object information, whether a virtual object corresponding to the object information exists in the initial three-dimensional model data; when a virtual object corresponding to the object information exists in the initial three-dimensional model data, acquiring the relative position between the virtual object and the virtual self-moving device in the initial three-dimensional model data, and judging whether the relative position is consistent with the distance information; and when the relative position is inconsistent with the distance information, determining that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, after the computer program is executed by the processor to query whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, the following steps may be further implemented: and when the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, determining that the virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
In one embodiment, the computer program when executed by the processor for creating a virtual object corresponding to the target object based on the object information and updating the initial three-dimensional model data based on the created virtual object and the distance information may include: sending the object information and the distance information to a server, wherein the object information and the distance information are used for indicating the server to create a virtual object corresponding to the target object according to the object information and updating initial three-dimensional model data according to the created virtual object and the distance information; and receiving the updated three-dimensional model data fed back by the server.
In one embodiment, the computer program, when executed by the processor, implements creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data, and may include: creating a virtual object corresponding to the target object based on the object information, and acquiring a target category label of the virtual object corresponding to the target object according to the corresponding relationship between the object information and the category label; and updating the initial three-dimensional model data according to the created virtual object, the target class label and the distance information to obtain updated three-dimensional model data.
In one embodiment, the computer program, when executed by the processor, implements creating a virtual object corresponding to the target object according to the object information and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data, which may include: creating a virtual object corresponding to the target object according to the object information, wherein the created virtual object comprises an object orientation; acquiring the orientation information of the self-moving device when the live-action image is acquired; and determining the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and updating the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring the acquisition time of the live-action image; and associating the acquisition time with the created virtual object corresponding to the target object.
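Associating the acquisition time can be as simple as storing a timestamp on the virtual object, as in this sketch (field names are illustrative assumptions):

```python
from datetime import datetime, timezone

def create_virtual_object(name, capture_time):
    # Store the acquisition time of the live-action image alongside the
    # virtual object so the model records when the object was observed.
    return {"name": name, "captured_at": capture_time.isoformat()}

obj = create_virtual_object(
    "chair", datetime(2020, 10, 29, 12, 0, tzinfo=timezone.utc)
)
```

The timestamp lets later updates distinguish stale observations from fresh ones when deciding whether the model still matches the physical space.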
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and direct Rambus DRAM (DRDRAM).
For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features should be considered within the scope of this specification as long as no contradiction exists between them.
The above-mentioned embodiments express only several implementations of the present application; their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for updating three-dimensional model data, the method comprising:
acquiring initial three-dimensional model data of a physical space;
acquiring a live-action image of the physical space, wherein the live-action image comprises a target object;
determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object and a self-moving device in the physical space;
judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information;
when the initial three-dimensional model data does not have a virtual target object corresponding to the target object, creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data;
the live-action image comprises a color image and depth data;
the determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object and the self-moving device in the physical space comprises:
performing feature extraction on the color image in the live-action image to obtain the object information of the target object in the live-action image;
and calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
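The two-branch step of claim 1 — object information from the color image, distance from the depth data — can be sketched as follows. The stub detector, the depth scale, and the box-averaging scheme are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch only: detect_objects and DEPTH_SCALE are assumptions.
DEPTH_SCALE = 0.001  # raw depth units -> metres (typical for RGB-D sensors)

def detect_objects(color_image):
    """Stand-in for the feature-extraction step; a real system would run a
    trained detector on the color image to obtain object information."""
    return [{"name": "chair", "bbox": (2, 1, 4, 3)}]  # (x0, y0, x1, y1)

def object_distance(depth_image, bbox):
    """Average the depth samples inside the detection box to estimate the
    distance between the target object and the self-moving device."""
    x0, y0, x1, y1 = bbox
    samples = [depth_image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(samples) / len(samples) * DEPTH_SCALE

depth = [[1500] * 6 for _ in range(6)]  # flat surface 1.5 m away, 6x6 frame
obj = detect_objects(None)[0]
dist = object_distance(depth, obj["bbox"])  # 1.5 metres
```

Averaging over the box is only one choice; taking the median or the nearest valid sample is common when depth frames contain holes.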
2. The method according to claim 1, wherein the determining whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information comprises:
inquiring whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information;
when a virtual object corresponding to the object information exists in the initial three-dimensional model data, acquiring a relative position between the virtual object and a virtual self-moving device in the initial three-dimensional model data, and judging whether the relative position is consistent with the distance information;
and when the relative position is inconsistent with the distance information, determining that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
3. The method according to claim 2, wherein after querying whether a virtual object corresponding to the object information exists in the initial three-dimensional model data according to the object information, the method further comprises:
and when the virtual object corresponding to the object information does not exist in the initial three-dimensional model data, determining that a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data.
4. The method of claim 1, wherein the creating a virtual object corresponding to the target object based on the object information and updating the initial three-dimensional model data based on the created virtual object and the distance information comprises:
sending the object information and the distance information to a server, wherein the object information and the distance information are used to instruct the server to create a virtual object corresponding to the target object according to the object information, and to update the initial three-dimensional model data according to the created virtual object and the distance information;
and receiving the updated three-dimensional model data fed back by the server.
5. The method according to claim 1, wherein the creating a virtual object corresponding to the target object according to the object information, and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data comprises:
creating a virtual object corresponding to the target object based on the object information, and acquiring a target category label of the virtual object corresponding to the target object according to a corresponding relationship between the object information and the category label;
and updating the initial three-dimensional model data according to the created virtual object, the target category label and the distance information to obtain updated three-dimensional model data.
6. The method according to claim 1, wherein the creating a virtual object corresponding to the target object according to the object information and updating the initial three-dimensional model data according to the created virtual object and the distance information to obtain updated three-dimensional model data comprises:
creating a virtual object corresponding to the target object according to the object information, wherein the created virtual object comprises an object orientation;
acquiring orientation information of the self-moving device when the live-action image is acquired;
and determining the orientation of the virtual object in the three-dimensional model data according to the orientation information of the self-moving device and the object orientation of the virtual object, and updating the initial three-dimensional model data according to the virtual object and the distance information to obtain updated three-dimensional model data.
7. The method of claim 1, further comprising:
acquiring the acquisition time of the live-action image;
and associating the acquisition time with the created virtual object corresponding to the target object.
8. A three-dimensional model data updating apparatus, comprising:
the initial three-dimensional model data acquisition module is used for acquiring initial three-dimensional model data of a physical space;
the live-action image acquisition module is used for acquiring a live-action image of the physical space, wherein the live-action image comprises a target object;
the distance information determining module is used for determining, according to the live-action image, object information of the target object in the live-action image and distance information between the target object and a self-moving device in the physical space;
the judging module is used for judging whether a virtual target object corresponding to the target object exists in the initial three-dimensional model data according to the object information and the distance information;
an updating module, configured to, when a virtual target object corresponding to the target object does not exist in the initial three-dimensional model data, create a virtual object corresponding to the target object according to the object information, and update the initial three-dimensional model data according to the created virtual object and the distance information, to obtain updated three-dimensional model data;
the live-action image comprises a color image and depth data;
the distance information determination module includes:
the object information generation submodule is used for extracting the characteristics of the color image in the live-action image to obtain the object information of the target object in the live-action image;
and the distance information determining submodule is used for calculating the distance information between the target object and the self-moving device according to the depth data in the live-action image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011184844.8A 2020-10-29 2020-10-29 Three-dimensional model data updating method, device, computer equipment and storage medium Active CN112348944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011184844.8A CN112348944B (en) 2020-10-29 2020-10-29 Three-dimensional model data updating method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011184844.8A CN112348944B (en) 2020-10-29 2020-10-29 Three-dimensional model data updating method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112348944A CN112348944A (en) 2021-02-09
CN112348944B true CN112348944B (en) 2022-06-28

Family

ID=74355792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011184844.8A Active CN112348944B (en) 2020-10-29 2020-10-29 Three-dimensional model data updating method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112348944B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656862A (en) * 2021-07-20 2021-11-16 中建科工集团有限公司 Drawing data updating processing method, device and computer readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN111768496A (en) * 2017-08-24 2020-10-13 Oppo广东移动通信有限公司 Image processing method, image processing device, server and computer-readable storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9747680B2 (en) * 2013-11-27 2017-08-29 Industrial Technology Research Institute Inspection apparatus, method, and computer program product for machine vision inspection

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN111768496A (en) * 2017-08-24 2020-10-13 Oppo广东移动通信有限公司 Image processing method, image processing device, server and computer-readable storage medium
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image

Also Published As

Publication number Publication date
CN112348944A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN111536964B (en) Robot positioning method and device, and storage medium
US20220057212A1 (en) Method for updating a map and mobile robot
CN111609852A (en) Semantic map construction method, sweeping robot and electronic equipment
CN108297115B (en) Autonomous repositioning method for robot
CN111429574A (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN111256687A (en) Map data processing method and device, acquisition equipment and storage medium
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
CN112336254B (en) Cleaning strategy generation method and device for sweeping robot, computer equipment and medium
CN111198378B (en) Boundary-based autonomous exploration method and device
WO2022016311A1 (en) Point cloud-based three-dimensional reconstruction method and apparatus, and computer device
KR101207535B1 (en) Image-based simultaneous localization and mapping for moving robot
CN111143489B (en) Image-based positioning method and device, computer equipment and readable storage medium
CN111179274A (en) Map ground segmentation method, map ground segmentation device, computer equipment and storage medium
CN112348944B (en) Three-dimensional model data updating method, device, computer equipment and storage medium
CN113111144A (en) Room marking method and device and robot movement method
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
CN111726591B (en) Map updating method, map updating device, storage medium and electronic equipment
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
CN112220405A (en) Self-moving tool cleaning route updating method, device, computer equipment and medium
CN112200907B (en) Map data generation method and device for sweeping robot, computer equipment and medium
CN114935341B (en) Novel SLAM navigation computation video identification method and device
CN112506182B (en) Floor sweeping robot positioning method and device, computer equipment and storage medium
CN114882115B (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN114459483B (en) Landmark navigation map construction and application method and system based on robot navigation
CN114425774A (en) Method and apparatus for recognizing walking path of robot, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant