CN117830559A - Live working scene modeling method and device, storage medium and computer equipment - Google Patents

Live working scene modeling method and device, storage medium and computer equipment

Info

Publication number
CN117830559A
Authority
CN
China
Prior art keywords
data
target
environment model
dimensional
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410018850.8A
Other languages
Chinese (zh)
Inventor
王毅
曲烽瑞
王喜军
张子翀
葛佳菲
史东谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202410018850.8A
Publication of CN117830559A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The present application provides a live working scene modeling method, apparatus, storage medium and computer device. The method comprises the following steps: acquiring laser data of a target scene at the current moment; performing data cleaning on the laser data to obtain three-dimensional data; if a three-dimensional environment model corresponding to the target scene exists, extracting edge feature points and plane feature points from the three-dimensional data; and matching each edge feature point and each plane feature point with the edge feature points and plane feature points in the three-dimensional environment model, respectively, so as to update the three-dimensional environment model and obtain a new three-dimensional environment model. Because the laser data are acquired by laser radar scanning and then subjected to data cleaning, the precision of scene modeling is improved; and because the three-dimensional environment model, when it already exists, is updated with the currently acquired laser data, the model can be updated in real time, which improves its timeliness and further improves the precision of scene modeling.

Description

Live working scene modeling method and device, storage medium and computer equipment
Technical Field
The present disclosure relates to the field of three-dimensional modeling technologies, and in particular, to a live working scene modeling method, apparatus, storage medium, and computer device.
Background
Environmental three-dimensional modeling is an important research direction in the fields of modern robotics and computer vision. It aims to build a digital representation of the real world environment by collecting and processing sensor data. With advances in computer technology and algorithms, three-dimensional modeling has evolved rapidly in recent years. Modern environmental three-dimensional modeling systems typically use a variety of sensors, such as cameras, IMUs (inertial measurement units), etc., to obtain more accurate, comprehensive environmental information.
Currently, because the live working environment itself is complex and dynamic changes, such as equipment movement and personnel entering and leaving the site, often occur during live working, the accuracy of the resulting model of the live working environment is low.
Disclosure of Invention
The present application aims to solve at least one of the above technical drawbacks, in particular the drawback of the prior art that modeling of the live working environment has low accuracy because the live working environment itself is complex and dynamic changes, such as equipment movement and personnel entering and leaving the site, often occur during live working.
In a first aspect, the present application provides a live working scene modeling method, the method including:
acquiring laser data of a target scene at the current moment;
performing data cleaning on the laser data to obtain three-dimensional data of the target scene;
judging whether a three-dimensional environment model corresponding to the target scene exists or not;
if a three-dimensional environment model corresponding to the target scene exists, extracting edge feature points and plane feature points in the three-dimensional data;
respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result;
respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result;
and updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model.
In one embodiment, the method further comprises:
if the three-dimensional environment model corresponding to the target scene does not exist, carrying out target recognition on the three-dimensional data to obtain tag data corresponding to the three-dimensional data; wherein the tag data comprises object types of the objects in the three-dimensional data;
And constructing a three-dimensional environment model corresponding to the target scene according to the three-dimensional data and the tag data.
In one embodiment, the performing object recognition on the three-dimensional data includes:
acquiring shape information and a motion mode of each target in the three-dimensional data;
inputting the shape information and the motion mode of each target in the three-dimensional data into a preset target identification model to obtain a target type corresponding to each target; the object recognition model is used for receiving shape information and motion modes of a plurality of objects, matching the shape information and the motion modes of each object, and outputting object types matched with the shape information and the motion modes of each object.
In one embodiment, the constructing a three-dimensional environment model corresponding to the target scene includes:
acquiring point cloud data in the three-dimensional data;
constructing an initial environment model according to the point cloud data and the tag data;
and carrying out data fusion on point cloud data corresponding to each target of the initial environment model, and determining the initial environment model subjected to data fusion as the three-dimensional environment model.
In one embodiment, the performing data cleaning on the laser data includes:
acquiring a point grade evaluation system; the point grade evaluation system comprises a plurality of target categories and evaluation rules corresponding to each target category;
scoring the performance of each point in the laser data on each target class based on the evaluation rule corresponding to each target class to obtain class scores of each point on each target class;
determining a point with a category score which does not meet the corresponding preset condition as a target point;
and deleting each target point from the laser data, and determining the laser data after deleting the target points as the three-dimensional data.
In one embodiment, the matching each edge feature point with an edge feature point in the three-dimensional environment model includes:
for each edge feature point, selecting a plurality of edge feature points closest to the edge feature point in the three-dimensional environment model;
determining residual errors between the edge characteristic points and line segments formed by the selected edge characteristic points;
when determining the residual error corresponding to each edge feature point, determining the matching information of each edge feature point according to the residual error corresponding to each edge feature point, and generating a first matching result according to the matching information of each edge feature point.
In one embodiment, the matching each planar feature point with a planar feature point in the three-dimensional environment model includes:
for each plane characteristic point, selecting a plurality of plane characteristic points closest to the plane characteristic point from the three-dimensional environment model;
determining a residual error between the plane characteristic point and a plane formed by the selected plane characteristic point;
when determining the residual error corresponding to each planar feature point, determining the matching information of each planar feature point according to the residual error corresponding to each planar feature point, and generating a second matching result according to the matching information of each planar feature point.
In a second aspect, the present application provides a live working scene modeling apparatus, the apparatus comprising:
the data acquisition module is used for acquiring laser data of a target scene at the current moment;
the data processing module is used for carrying out data cleaning on the laser data to obtain three-dimensional data of the target scene;
the model judging module is used for judging whether a three-dimensional environment model corresponding to the target scene exists or not;
the feature point extraction module is used for extracting edge feature points and plane feature points in the three-dimensional data if a three-dimensional environment model corresponding to the target scene exists;
The first matching module is used for respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result;
the second matching module is used for respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result;
and the model updating module is used for updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model.
In a third aspect, the present application provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the live working scene modeling method as described in any one of the embodiments above.
In a fourth aspect, the present application provides a computer device comprising: one or more processors, and memory;
the memory has stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the live working scene modeling method of any one of the embodiments described above.
From the above technical solutions, the embodiments of the present application have the following advantages:
the application provides a live working scene modeling method, a live working scene modeling device, a storage medium and computer equipment, wherein the live working scene modeling method comprises the following steps: acquiring laser data of a target scene at the current moment; performing data cleaning on the laser data to obtain three-dimensional data of a target scene; when a three-dimensional environment model corresponding to the target scene exists, extracting edge characteristic points and plane characteristic points in the three-dimensional data; respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result, and respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result; and updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model. The laser data are acquired in a laser radar scanning mode, and then the laser data are subjected to data cleaning, so that the accuracy of scene modeling is improved, and under the condition that a three-dimensional environment model exists, the three-dimensional environment model is updated by utilizing the currently acquired laser data, so that the three-dimensional environment model can be updated in real time, the dynamic change of the environment can be responded quickly, the instantaneity of the three-dimensional environment model is improved, and meanwhile, the accuracy of scene modeling can be further improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive faculty for a person skilled in the art.
Fig. 1 is a schematic flow chart of modeling a live working scene according to an embodiment of the present application;
fig. 2 is a schematic flow chart of constructing a three-dimensional environment model corresponding to a target scene according to an embodiment of the present application;
fig. 3 is a schematic flow chart of data cleaning on laser data according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a live working scene modeling apparatus according to an embodiment of the present application;
fig. 5 is an internal structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In one embodiment, the present application provides a live working scene modeling method; the following embodiments are described by taking its application to a server as an example. It can be understood that the server may be a single server or a server cluster composed of a plurality of servers, which is not specifically limited in this application.
As shown in fig. 1, the present application provides a live working scene modeling method, which includes:
step S101: and acquiring laser data of the target scene at the current moment.
The target scene refers to a scene that the user needs to model. Laser data refers to data containing three-dimensional spatial information acquired by a laser scanning device. The laser scanning device emits a laser beam and measures the distance to the object, thereby obtaining information such as the position and shape of the object. It can be appreciated that, since the laser data is collected by a laser scanning device, the laser data has high measurement accuracy and good real-time performance.
In this step, when the user needs to know information such as the geometric shape and structure of each object in a certain scene in order to perform a job, the scene needs to be modeled to ensure the safety and completeness of the job, and modeling requires obtaining the laser data of the scene. For example, when a live working scene needs to be modeled, the position of the hardware device may be set and a modeling instruction initiated. When the server receives the modeling instruction, it controls the hardware device to start up and adjust to a suitable viewing angle for capturing, and then obtains the laser data of the target scene at the current moment. It is understood that hardware devices include, but are not limited to, laser scanning devices, cameras, and the like.
Step S102: and performing data cleaning on the laser data to obtain three-dimensional data of the target scene.
It is understood that data cleansing refers to processing collected data to remove erroneous, inconsistent or invalid data, ensuring data quality and availability. The three-dimensional data refers to laser data subjected to data cleaning.
Specifically, the laser scanning device emits a laser beam during scanning and measures the distance to the object, thereby obtaining information such as the position and shape of the object, that is, the laser data. This information is represented in the form of a point cloud, i.e. the object surface is represented discretely as a series of points, each containing the three-dimensional coordinates and other attributes of that point. However, the laser data may contain some points that are inaccurate, invalid, or anomalous due to environmental interference, equipment errors, or other factors. These points may be outliers, noise points, missing points, etc. Therefore, data cleaning is performed to remove these erroneous and invalid points and obtain accurate three-dimensional data corresponding to the target scene. The cleaned laser data better describes the geometry, structure and characteristics of the target scene and provides a reliable data basis for subsequent analysis, modeling and application.
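For illustration only, the following is a minimal sketch of one possible cleaning step of this kind, assuming the laser data of one sweep is available as an N x 3 NumPy array; the k-nearest-neighbour statistic and the thresholds are illustrative assumptions and are distinct from the point grade evaluation system described later in this application.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    unusually large (a common proxy for stray or noisy lidar returns).

    points : (N, 3) array of x, y, z coordinates from one lidar sweep.
    """
    tree = cKDTree(points)
    # query returns the point itself as the first neighbour, so ask for k + 1
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    keep = mean_dist <= threshold
    return points[keep]

# Hypothetical usage on one sweep of laser data:
# cleaned = remove_statistical_outliers(raw_sweep, k=8, std_ratio=2.0)
```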
Step S103: and judging whether a three-dimensional environment model corresponding to the target scene exists or not.
The three-dimensional environment model refers to that the environment and the scene in the real world are presented in the form of a three-dimensional mathematical model through the technical means of acquisition, processing, modeling and the like. The three-dimensional environmental model may contain information on the appearance, structure, texture, illumination, dynamic changes, etc. of the environment.
It can be understood that, by judging whether a three-dimensional environment model corresponding to the target scene exists, it is determined whether the laser data of the target scene acquired in step S101 is used for building a new model or for updating the currently existing three-dimensional environment model corresponding to the target scene.
Step S104: and if the three-dimensional environment model corresponding to the target scene exists, extracting edge characteristic points and plane characteristic points in the three-dimensional data.
The edge feature points refer to feature points representing the boundary of an object or the edge of an area in the image or the point cloud data. Edge feature points are typically located in the transition region between two different colors, brightnesses or textures, with significant variation in surrounding pixel values. The planar feature points refer to feature points representing a flat surface or an approximate plane in the image or point cloud data. The planar feature points are typically located in a flat area of the object surface with less variation in pixel values around it.
In this step, when a three-dimensional environment model corresponding to the target scene exists, that model is updated by using the laser data after data cleaning (the three-dimensional data). In this case, feature points need to be extracted from the three-dimensional data to obtain the edge feature points and plane feature points.
Further, the edge feature points and the plane feature points can be extracted from the three-dimensional data by using corresponding feature point detection algorithms and techniques. For edge feature points, for example, algorithms based on curvature, normal variation, depth variation, etc. may be used; for plane feature points, algorithms based on normal consistency, RANSAC, etc. may be used. Algorithms commonly used to detect plane feature points and edge feature points include, but are not limited to, SIFT feature point detection, NARF feature point detection, etc., which are not specifically limited in this application.
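As a purely illustrative sketch of how such an extraction step might look (not the specific algorithm claimed here), the following classifies the points of one ordered scan line by a local curvature measure in the spirit of LOAM; the window size and thresholds are assumptions.

```python
import numpy as np

def classify_scanline_features(scan, window=5, edge_thresh=0.5, plane_thresh=0.05):
    """Split one ordered lidar scan line into edge and plane feature candidates
    using a local-curvature criterion.

    scan : (N, 3) array of consecutive points from a single scan ring.
    """
    n = scan.shape[0]
    curvature = np.full(n, np.nan)
    for i in range(window, n - window):
        neighbours = scan[i - window:i + window + 1]
        # sum of differences between the centre point and its neighbours
        diff = (neighbours - scan[i]).sum(axis=0)
        curvature[i] = np.linalg.norm(diff) / (2 * window * np.linalg.norm(scan[i]) + 1e-9)
    edge_idx = np.where(curvature > edge_thresh)[0]    # high curvature: edges
    plane_idx = np.where(curvature < plane_thresh)[0]  # low curvature: planes
    return scan[edge_idx], scan[plane_idx]
```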
Step S105: and respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result.
In this step, feature point matching is performed on each edge feature point in the three-dimensional data and the edge feature point in the three-dimensional environment model, respectively. It can be understood that, for each edge feature point in the three-dimensional data, the edge feature point is matched with an edge feature point in the three-dimensional environment model to obtain matching information corresponding to the edge feature point, and when each edge feature point in the three-dimensional data is matched to obtain corresponding matching information, a first matching result is generated according to the matching information corresponding to each edge feature point in the three-dimensional data.
It can be appreciated that the matching information of the edge feature points includes the similarity between the edge feature points and part of the edge feature points in the three-dimensional environment model. And then the corresponding relation between the edge characteristic points in the three-dimensional data and the edge characteristic points in the three-dimensional environment model can be determined according to the matching information corresponding to each edge characteristic point in the three-dimensional data.
Step S106: and respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result.
The description of this step can be found in the description of step S105.
Step S107: and updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model.
The first matching result comprises matching information corresponding to each edge feature point in the three-dimensional data. The second matching result comprises matching information corresponding to each plane characteristic point in the three-dimensional data.
In this step, for each edge feature point in the three-dimensional data, the position of the feature point corresponding to the edge feature point in the three-dimensional environment model is updated according to the matching information corresponding to each edge feature point. And updating the planar model in which the feature points corresponding to the planar feature points are located in the three-dimensional environment model according to the matching information corresponding to each planar feature point. And finally, updating the three-dimensional environment model to obtain a new three-dimensional environment model.
It can be understood that when a three-dimensional environment model corresponding to the target scene exists, the laser data of the target scene obtained at each moment is used to update the three-dimensional environment model. For example, assuming that a three-dimensional environment model A corresponding to the target scene exists at the current moment, the laser data B of the target scene obtained at the next moment is used to update the three-dimensional environment model A to obtain a three-dimensional environment model C, and the laser data D obtained at the moment after that is then used to update the three-dimensional environment model C to obtain yet another new model. When a user needs to use the three-dimensional environment model, the user only needs to acquire the latest version. In this way, the three-dimensional environment model can be updated in real time and respond quickly to dynamic changes of the environment, which improves the timeliness of the three-dimensional environment model; meanwhile, thanks to the accuracy of the laser data, the precision of scene modeling is further improved.
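To summarise the per-sweep flow described above, here is a small orchestration sketch; the step implementations are supplied by the caller, and every name below is a placeholder rather than an API defined by this application.

```python
def process_sweep(raw_sweep, model, steps):
    """One pass of the modeling flow for a single lidar sweep (steps S101-S109).

    `steps` is a plain dict of callables supplied by the caller, e.g.
    {"clean", "extract", "match_edges", "match_planes", "recognize",
     "build", "update"}; their implementations are outside this sketch.
    """
    cloud = steps["clean"](raw_sweep)                    # step S102: data cleaning
    if model is None:                                    # step S103: no model yet
        labels = steps["recognize"](cloud)               # step S108: target recognition
        return steps["build"](cloud, labels)             # step S109: build new model
    edges, planes = steps["extract"](cloud)              # step S104: feature extraction
    m1 = steps["match_edges"](edges, model)              # step S105: first matching result
    m2 = steps["match_planes"](planes, model)            # step S106: second matching result
    return steps["update"](model, m1, m2)                # step S107: model update
```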
Step S108: and if the three-dimensional environment model corresponding to the target scene does not exist, carrying out target recognition on the three-dimensional data to obtain tag data corresponding to the three-dimensional data.
Wherein the tag data includes object types of the respective objects in the three-dimensional data. It is understood that each object in the three-dimensional data refers to an object in the target scene, such as power equipment, a person, terrain, etc.
In the step, when the three-dimensional environment model corresponding to the target scene does not exist, target recognition is performed on the three-dimensional data so as to obtain tag data corresponding to the three-dimensional data, and the three-dimensional data and the tag data are used as data bases for constructing the three-dimensional environment model.
Step S109: and constructing a three-dimensional environment model corresponding to the target scene according to the three-dimensional data and the tag data.
It can be appreciated that the three-dimensional environment model corresponding to the target scene is obtained by analyzing the three-dimensional data and the tag data, such as feature extraction, and modeling based on the three-dimensional data and the tag data.
The application provides a live working scene modeling method, apparatus, storage medium and computer device, wherein the live working scene modeling method comprises the following steps: acquiring laser data of a target scene at the current moment; performing data cleaning on the laser data to obtain three-dimensional data of the target scene; when a three-dimensional environment model corresponding to the target scene exists, extracting edge feature points and plane feature points from the three-dimensional data; matching each edge feature point with the edge feature points in the three-dimensional environment model to obtain a first matching result, and matching each plane feature point with the plane feature points in the three-dimensional environment model to obtain a second matching result; and updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model. Because the laser data are acquired by laser radar scanning and then subjected to data cleaning, the precision of scene modeling is improved. Because an already existing three-dimensional environment model is updated with the currently acquired laser data, the model can be updated in real time and respond quickly to dynamic changes of the environment, which improves the timeliness of the three-dimensional environment model and further improves the precision of scene modeling.
In one embodiment, performing object recognition on three-dimensional data includes:
acquiring shape information and a motion mode of each target in the three-dimensional data;
inputting the shape information and the motion mode of each target in the three-dimensional data into a preset target recognition model to obtain a target type corresponding to each target; the object recognition model is used for receiving shape information and motion modes of a plurality of objects, matching the shape information and the motion modes of each object, and outputting object types matched with the shape information and the motion modes of each object.
The shape information refers to features of the appearance outline, boundary or geometric shape of a target. This information can be used to distinguish between different target types and helps the computer system detect and classify targets from images or sensor data. The motion pattern refers to the motion law or pattern of a target in time and space. Analyzing the motion pattern of a target helps the computer system track, predict and understand it.
In this embodiment, the shape information and motion pattern of each target are obtained from the three-dimensional data, so that the type of each target can be identified by the target recognition model, yielding the target type corresponding to each target. Specifically, the training process of the target recognition model may use the shape information and motion patterns of a plurality of targets as the training data and the target type of each target as the corresponding label data, so as to iteratively train the target recognition model; the finally obtained model is used as the target recognition model in this embodiment.
It can be appreciated that the tag data obtained through target recognition allows the three-dimensional environment model constructed from it to accurately locate and track the targets, and to better understand and reason about the targets in the target scene, which further improves the accuracy of scene modeling.
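As one concrete but non-limiting way to realise the target recognition model of this embodiment, the shape information and motion pattern of each target could be flattened into a feature vector and fed to an off-the-shelf classifier; the feature layout, the toy training rows and the choice of a random forest below are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is [bbox_length, bbox_width, bbox_height,
# mean_speed, speed_variance], i.e. simple shape information plus a motion pattern.
X_train = np.array([
    [0.6, 0.6, 1.7, 1.2, 0.3],    # a walking person
    [2.0, 1.5, 2.5, 0.0, 0.0],    # a static equipment cabinet
    [10.0, 0.1, 0.1, 0.0, 0.0],   # an overhead conductor
])
y_train = ["person", "equipment", "conductor"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Inference on the targets detected in a new sweep
new_targets = np.array([[0.5, 0.5, 1.8, 0.9, 0.2]])
print(clf.predict(new_targets))   # -> ['person']
```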
As shown in fig. 2, in one embodiment, constructing a three-dimensional environment model corresponding to a target scene includes:
step S201: and acquiring point cloud data in the three-dimensional data.
Wherein the point cloud data is a data set consisting of a large number of three-dimensional points for describing the geometry and spatial position of the object surface.
Step S202: and constructing an initial environment model according to the point cloud data and the label data.
In this step, feature extraction is performed on the point cloud data and the tag data, for example, features of a floor, a wall, an object, and the like are extracted, and a three-dimensional map, that is, an initial environment model is constructed based on the features extracted in the feature extraction.
Step S203: and carrying out data fusion on point cloud data corresponding to each target of the initial environment model, and determining the initial environment model subjected to data fusion as a three-dimensional environment model.
In this step, the point cloud data corresponding to each target may be subjected to data fusion. Such fusion may be a simple weighted average, or may use more complex algorithms such as nearest neighbor interpolation, gaussian mixture model, etc., which are not particularly limited in this application. The data fusion is performed to integrate point cloud data acquired from different viewing angles or sensors to provide more comprehensive spatial information. And then determining the initial environment model subjected to data fusion as a three-dimensional environment model.
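The "simple weighted average" fusion mentioned above could, for example, be realised by pooling all points of a target that fall into the same voxel cell; the voxel size and the default equal weights in this sketch are assumptions.

```python
import numpy as np

def fuse_by_voxel_average(points, weights=None, voxel=0.05):
    """Fuse point cloud data by averaging all points (optionally weighted)
    that fall into the same voxel cell.

    points  : (N, 3) array, e.g. the points belonging to one target.
    weights : (N,) array of per-point confidences; equal weights if None.
    """
    if weights is None:
        weights = np.ones(len(points))
    keys = np.floor(points / voxel).astype(np.int64)
    fused = {}
    for key, p, w in zip(map(tuple, keys), points, weights):
        acc = fused.setdefault(key, [np.zeros(3), 0.0])
        acc[0] += w * p     # accumulate weighted coordinates
        acc[1] += w         # accumulate total weight per voxel
    return np.array([s / w for s, w in fused.values()])
```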
As shown in fig. 3, in one embodiment, performing data cleaning on laser data includes:
step S301: and obtaining a point grade evaluation system.
The point grade evaluation system comprises a plurality of target categories and evaluation rules corresponding to the target categories.
It will be appreciated that the point grade evaluation system may be adjusted and updated, which means that the latest point grade evaluation system is acquired whenever this acquiring step is executed.
Step S302: and scoring the performance of each point in the laser data on each target class based on the evaluation rule corresponding to each target class, and obtaining the class score of each point on each target class.
In this step, when the evaluation rule corresponding to each target category is obtained, the performance of each point in the laser data on each target category may be scored, so as to obtain the category score corresponding to each point in the laser data on each target category.
Step S303: and determining the point with the category score not meeting the corresponding preset condition as a target point.
Step S304: and deleting each target point from the laser data, and determining the laser data after deleting the target points as three-dimensional data.
In this embodiment, for each point in the laser data, if the category score of the point does not satisfy the preset condition of the corresponding target category, the point is determined as the target point. And then deleting the target point from the laser data, and determining the deleted laser data as three-dimensional data. The preset condition is a condition for judging a class score of the corresponding target class. Specifically, each target category has a corresponding preset condition, for example, the corresponding category score is greater than a preset threshold, the corresponding category score is less than the preset threshold, the corresponding category score is greater than the preset threshold after operation of a preset formula or a preset rule, and the like. The setting may be specifically performed according to the type of the target category, the evaluation rule, and the like, which is not particularly limited in this application.
For example, suppose there are a target class A and a target class B in the point grade evaluation system, and the class scores of point C in the laser data on target class A and target class B are 5 and 10, respectively. Assuming that the preset condition corresponding to target class A is that the corresponding class score is greater than 8, and the preset condition corresponding to target class B is that the corresponding class score is greater than 9, the class score of point C does not satisfy the preset condition of its corresponding target class (the preset condition corresponding to target class A), and therefore point C is determined as a target point.
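The worked example above can be written out as a small filter; here the evaluation rules are reduced to looking up a pre-computed score per point and comparing it with a per-category threshold, which is only a simplified stand-in for the evaluation rules of a real point grade evaluation system.

```python
def clean_by_grade(points, scores, thresholds):
    """Keep only points whose score on every target category meets that
    category's preset condition (here: score strictly greater than threshold).

    points     : list of point records (e.g. (x, y, z) tuples).
    scores     : list of dicts, one per point, mapping category -> score.
    thresholds : dict mapping category -> minimum acceptable score.
    """
    kept = []
    for point, score in zip(points, scores):
        if all(score[cat] > thr for cat, thr in thresholds.items()):
            kept.append(point)          # passes every category
        # otherwise the point is a target point and is dropped
    return kept

# The example from the text: point C scores 5 on category A and 10 on B,
# the thresholds are 8 and 9, so C fails category A and is removed.
print(clean_by_grade([("C",)], [{"A": 5, "B": 10}], {"A": 8, "B": 9}))  # -> []
```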
It can be understood that each point in the laser data is evaluated through a plurality of target categories, and when a category score of a certain point in a certain target category does not meet a preset condition corresponding to the target category, the point is removed, so that the laser data after data cleaning, namely the three-dimensional data, is obtained. Thus, noise and interference of stray points can be reduced, and the quality of three-dimensional data can be improved. Thereby improving the precision of scene modeling.
In one embodiment, matching each edge feature point with an edge feature point in the three-dimensional environment model includes:
selecting a plurality of edge feature points closest to the edge feature points in the three-dimensional environment model for each edge feature point;
determining residual errors between the edge characteristic points and line segments formed by the selected edge characteristic points;
when determining the residual error corresponding to each edge feature point, determining the matching information of each edge feature point according to the residual error corresponding to each edge feature point, and generating a first matching result according to the matching information of each edge feature point.
Wherein, the residual refers to the difference or error between the observed value and the predicted value. It represents the deviation between the actual observations and the model predictions.
In this embodiment, by selecting a plurality of edge feature points closest to the edge feature points in the three-dimensional environment model, residuals between the edge feature points and line segments formed by the edge feature points selected in the three-dimensional environment model are calculated. When the residual error corresponding to each edge feature point is obtained, the matching information of each edge feature point is determined according to the residual error corresponding to each edge feature point, and then a first matching result is obtained.
Further, according to the residual error corresponding to each edge feature point, the matching information of each edge feature point can be determined by a similarity measurement method, for example, a gaussian distribution model based on the residual error, and the like.
In one embodiment, for each edge feature point, 5 edge feature points closest to the edge feature point may be selected in the three-dimensional environment model.
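For illustration, a point-to-line residual of this kind could be computed as below, fitting a line through the 5 nearest model edge points via their centroid and principal direction; this is one standard formulation and is not asserted to be the exact computation intended by this application.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_residual(p, model_edge_points, k=5):
    """Distance from edge feature point p to the line fitted through its
    k nearest edge feature points in the three-dimensional environment model."""
    tree = cKDTree(model_edge_points)
    _, idx = tree.query(p, k=k)
    neighbours = model_edge_points[idx]
    centroid = neighbours.mean(axis=0)
    # principal direction of the neighbours approximates the line segment
    _, _, vt = np.linalg.svd(neighbours - centroid)
    direction = vt[0]
    offset = p - centroid
    # residual = length of the component of `offset` orthogonal to the line
    return np.linalg.norm(offset - np.dot(offset, direction) * direction)
```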
In one embodiment, matching each planar feature point with a planar feature point in the three-dimensional environmental model includes:
for each plane characteristic point, selecting a plurality of plane characteristic points closest to the plane characteristic point in the three-dimensional environment model;
determining a residual error between the plane characteristic point and a plane formed by the selected plane characteristic point;
When determining the residual error corresponding to each planar feature point, determining the matching information of each planar feature point according to the residual error corresponding to each planar feature point, and generating a second matching result according to the matching information of each planar feature point.
In this embodiment, by selecting a plurality of planar feature points closest to the planar feature points in the three-dimensional environment model, residuals between the planar feature points and a plane formed by the planar feature points selected in the three-dimensional environment model are calculated. When the residual error corresponding to each plane characteristic point is obtained, the matching information of each plane characteristic point is determined according to the residual error corresponding to each plane characteristic point, and then a second matching result is obtained.
Further, according to the residual error corresponding to each planar feature point, the matching information of each planar feature point can be determined by a similarity measurement method, for example, a gaussian distribution model based on the residual error, and the like.
In one embodiment, for each planar feature point, 5 planar feature points closest to the planar feature point may be selected in the three-dimensional environment model.
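Analogously, a point-to-plane residual could be computed as below, fitting a plane through the 5 nearest model plane points and taking the distance along its normal; again this is an illustrative formulation, not necessarily the exact one used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def plane_residual(p, model_plane_points, k=5):
    """Distance from plane feature point p to the plane fitted through its
    k nearest plane feature points in the three-dimensional environment model."""
    tree = cKDTree(model_plane_points)
    _, idx = tree.query(p, k=k)
    neighbours = model_plane_points[idx]
    centroid = neighbours.mean(axis=0)
    # the plane normal is the direction of least variance of the neighbours
    _, _, vt = np.linalg.svd(neighbours - centroid)
    normal = vt[-1]
    # residual = distance of p from the fitted plane along its normal
    return abs(np.dot(p - centroid, normal))
```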
It can be understood that by matching the planar feature points and the edge feature points in the three-dimensional data with the feature points in the three-dimensional environment model, the three-dimensional environment model can be updated, so that the real-time dynamic update of the three-dimensional environment model corresponding to the target scene can be realized. And further, the dynamic change of the environment is responded quickly, and the real-time performance of the three-dimensional environment model is improved.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the steps are not strictly limited to this order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
The following describes a live working scene modeling apparatus provided in an embodiment of the present application; the live working scene modeling apparatus described below and the live working scene modeling method described above may be referred to in correspondence with each other.
As shown in fig. 4, the present application provides a live working scenario modeling apparatus 400, the apparatus including:
A data acquisition module 401, configured to acquire laser data of a target scene at a current moment;
the data processing module 402 is configured to perform data cleaning on the laser data to obtain three-dimensional data of the target scene;
a model judging module 403, configured to judge whether a three-dimensional environment model corresponding to the target scene exists;
the feature point extraction module 404 is configured to extract edge feature points and plane feature points in the three-dimensional data if a three-dimensional environment model corresponding to the target scene exists;
the first matching module 405 is configured to match each edge feature point with an edge feature point in the three-dimensional environment model, so as to obtain a first matching result;
a second matching module 406, configured to match each planar feature point with a planar feature point in the three-dimensional environment model, so as to obtain a second matching result;
the model updating module 407 is configured to update the three-dimensional environment model according to the first matching result and the second matching result, so as to obtain a new three-dimensional environment model.
In one embodiment, the live working scene modeling apparatus further includes:
the target recognition module is used for carrying out target recognition on the three-dimensional data if the three-dimensional environment model corresponding to the target scene does not exist, so as to obtain tag data corresponding to the three-dimensional data; the tag data comprises target types of targets in the three-dimensional data;
The model construction module is used for constructing a three-dimensional environment model corresponding to the target scene according to the three-dimensional data and the tag data.
In one embodiment, the object recognition module includes:
the information acquisition sub-module is used for acquiring shape information and motion modes of each target in the three-dimensional data;
the target recognition sub-module is used for inputting the shape information and the motion mode of each target in the three-dimensional data into a preset target recognition model to obtain a target type corresponding to each target; the object recognition model is used for receiving shape information and motion modes of a plurality of objects, matching the shape information and the motion modes of each object, and outputting object types matched with the shape information and the motion modes of each object.
In one embodiment, the model building module comprises:
the point cloud data acquisition sub-module is used for acquiring point cloud data in the three-dimensional data;
the model construction submodule is used for constructing an initial environment model according to the point cloud data and the tag data;
the model determining sub-module is used for carrying out data fusion on point cloud data corresponding to each target of the initial environment model and determining the initial environment model subjected to data fusion as a three-dimensional environment model.
In one embodiment, a data processing module includes:
the system acquisition sub-module is used for acquiring a point grade evaluation system; the point grade evaluation system comprises a plurality of target categories and evaluation rules corresponding to each target category;
the class score determining sub-module is used for scoring the performance of each point in the laser data on each target class based on the evaluation rule corresponding to each target class to obtain the class score of each point on each target class;
the target point determining submodule is used for determining a point with a category score which does not meet the corresponding preset condition as a target point;
and the three-dimensional data determining sub-module is used for deleting each target point from the laser data and determining the laser data after deleting the target points as three-dimensional data.
In one embodiment, the first matching module includes:
the first feature point selecting sub-module is used for selecting a plurality of edge feature points closest to each edge feature point in the three-dimensional environment model;
a first residual determination submodule, configured to determine a residual between the edge feature point and a line segment formed by the selected edge feature point;
And the first result determining submodule is used for determining the matching information of each edge feature point according to the residual error corresponding to each edge feature point when determining the residual error corresponding to each edge feature point, and generating a first matching result according to the matching information of each edge feature point.
In one embodiment, the second matching module includes:
the second feature point selecting sub-module is used for selecting a plurality of plane feature points closest to the plane feature points in the three-dimensional environment model for each plane feature point;
a second residual determination submodule, configured to determine a residual between the plane feature point and a plane formed by the selected plane feature point;
and the second result determining submodule is used for determining the matching information of each plane characteristic point according to the residual error corresponding to each plane characteristic point when determining the residual error corresponding to each plane characteristic point, and generating a second matching result according to the matching information of each plane characteristic point.
The division of the modules in the live working scene modeling apparatus is merely for illustration; in other embodiments, the live working scene modeling apparatus may be divided into different modules as needed to perform all or part of its functions. The modules in the live working scene modeling apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, the present application also provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the live working scene modeling method as set forth in any of the above embodiments.
In one embodiment, the present application further provides a computer device having stored therein computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the live working scene modeling method as set forth in any of the above embodiments.
Schematically, as shown in fig. 5, fig. 5 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application, and the computer device 500 may be provided as a server. Referring to FIG. 5, the computer device 500 includes a processing component 502, which further includes one or more processors, and memory resources represented by memory 501 for storing instructions, such as applications, executable by the processing component 502. The application program stored in the memory 501 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 502 is configured to execute the instructions to perform the live working scene modeling method of any of the embodiments described above.
The computer device 500 may also include a power supply component 503 configured to perform power management of the computer device 500, a wired or wireless network interface 504 configured to connect the computer device 500 to a network, and an input/output (I/O) interface 505. The computer device 500 may operate based on an operating system stored in the memory 501, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Finally, it is further noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes," and/or "having," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof, and include any and all combinations of the listed items.
In the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on the difference from other embodiments, and may be combined according to needs, and the same similar parts may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A live working scene modeling method, the method comprising:
acquiring laser data of a target scene at the current moment;
performing data cleaning on the laser data to obtain three-dimensional data of the target scene;
judging whether a three-dimensional environment model corresponding to the target scene exists or not;
if a three-dimensional environment model corresponding to the target scene exists, extracting edge feature points and plane feature points in the three-dimensional data;
Respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result;
respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result;
and updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model.
2. The live working scenario modeling method of claim 1, further comprising:
if the three-dimensional environment model corresponding to the target scene does not exist, carrying out target recognition on the three-dimensional data to obtain tag data corresponding to the three-dimensional data; wherein the tag data comprises object types of the objects in the three-dimensional data;
and constructing a three-dimensional environment model corresponding to the target scene according to the three-dimensional data and the tag data.
3. The live working scene modeling method as defined in claim 2, wherein the performing object recognition on the three-dimensional data includes:
acquiring shape information and a motion mode of each target in the three-dimensional data;
Inputting the shape information and the motion mode of each target in the three-dimensional data into a preset target identification model to obtain a target type corresponding to each target; the object recognition model is used for receiving shape information and motion modes of a plurality of objects, matching the shape information and the motion modes of each object, and outputting object types matched with the shape information and the motion modes of each object.
4. The live working scene modeling method according to claim 2, wherein the constructing a three-dimensional environment model corresponding to the target scene includes:
acquiring point cloud data in the three-dimensional data;
constructing an initial environment model according to the point cloud data and the tag data;
and carrying out data fusion on point cloud data corresponding to each target of the initial environment model, and determining the initial environment model subjected to data fusion as the three-dimensional environment model.
5. A live working scene modeling method as claimed in any of claims 1 to 4 wherein the data cleaning of the laser data comprises:
acquiring a point grade evaluation system; the point grade evaluation system comprises a plurality of target categories and evaluation rules corresponding to each target category;
Scoring the performance of each point in the laser data on each target class based on the evaluation rule corresponding to each target class to obtain class scores of each point on each target class;
determining a point with a category score which does not meet the corresponding preset condition as a target point;
and deleting each target point from the laser data, and determining the laser data after deleting the target points as the three-dimensional data.
6. The live working scene modeling method as defined in claim 1, wherein the matching each edge feature point with an edge feature point in the three-dimensional environmental model, respectively, comprises:
for each edge feature point, selecting a plurality of edge feature points closest to the edge feature point in the three-dimensional environment model;
determining residual errors between the edge characteristic points and line segments formed by the selected edge characteristic points;
when determining the residual error corresponding to each edge feature point, determining the matching information of each edge feature point according to the residual error corresponding to each edge feature point, and generating a first matching result according to the matching information of each edge feature point.
7. The live working scene modeling method as defined in claim 1, wherein the matching each planar feature point with a planar feature point in the three-dimensional environmental model, respectively, comprises:
For each plane characteristic point, selecting a plurality of plane characteristic points closest to the plane characteristic point from the three-dimensional environment model;
determining a residual error between the plane characteristic point and a plane formed by the selected plane characteristic point;
when determining the residual error corresponding to each planar feature point, determining the matching information of each planar feature point according to the residual error corresponding to each planar feature point, and generating a second matching result according to the matching information of each planar feature point.
8. A live working scene modeling apparatus, the apparatus comprising:
the data acquisition module is used for acquiring laser data of a target scene at the current moment;
the data processing module is used for carrying out data cleaning on the laser data to obtain three-dimensional data of the target scene;
the model judging module is used for judging whether a three-dimensional environment model corresponding to the target scene exists or not;
the feature point extraction module is used for extracting edge feature points and plane feature points in the three-dimensional data if a three-dimensional environment model corresponding to the target scene exists;
the first matching module is used for respectively matching each edge characteristic point with the edge characteristic points in the three-dimensional environment model to obtain a first matching result;
The second matching module is used for respectively matching each plane characteristic point with the plane characteristic points in the three-dimensional environment model to obtain a second matching result;
and the model updating module is used for updating the three-dimensional environment model according to the first matching result and the second matching result to obtain a new three-dimensional environment model.
9. A storage medium, characterized by: stored in the storage medium are computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the live working scene modeling method of any one of claims 1 to 7.
10. A computer device, comprising: one or more processors, and memory;
stored in the memory are computer readable instructions which, when executed by the one or more processors, perform the steps of the live working scene modeling method of any one of claims 1 to 7.
CN202410018850.8A 2024-01-04 2024-01-04 Live working scene modeling method and device, storage medium and computer equipment Pending CN117830559A (en)

Priority Applications (1)

Application Number: CN202410018850.8A; Priority Date: 2024-01-04; Filing Date: 2024-01-04; Title: Live working scene modeling method and device, storage medium and computer equipment

Publications (1)

Publication Number: CN117830559A; Publication Date: 2024-04-05

Family ID: 90504014

Country Status (1)

Country: CN; Publication: CN117830559A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination