CN110322553B - Method and system for lofting implementation of laser radar point cloud mixed reality scene - Google Patents

Method and system for lofting implementation of laser radar point cloud mixed reality scene

Info

Publication number
CN110322553B
CN110322553B
Authority
CN
China
Prior art keywords
point cloud
scene
laser radar
coordinates
radar point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910621191.6A
Other languages
Chinese (zh)
Other versions
CN110322553A (en)
Inventor
王师
王滋政
张天巧
隆华平
Current Assignee
Guangzhou Jiantong Surveying Mapping And Geoinformation Technology Co ltd
Original Assignee
Guangzhou Jiantong Surveying Mapping And Geoinformation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Jiantong Surveying Mapping And Geoinformation Technology Co ltd filed Critical Guangzhou Jiantong Surveying Mapping And Geoinformation Technology Co ltd
Priority to CN201910621191.6A priority Critical patent/CN110322553B/en
Publication of CN110322553A publication Critical patent/CN110322553A/en
Application granted granted Critical
Publication of CN110322553B publication Critical patent/CN110322553B/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/61Scene description
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2012Colour editing, changing, or manipulating; Use of colour codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2024Style variation

Abstract

The application relates to a method and a system for implementing lofting in a laser radar point cloud mixed reality scene. The method comprises the following steps: acquiring laser radar point cloud data of an area to be lofted; assigning coordinates and attributes to the laser radar point cloud data to obtain a processed laser radar point cloud; constructing a point cloud scene from the processed laser radar point cloud; registering the point cloud scene with the real scene of the area to be lofted; and lofting on the real scene with the coordinates of the registered point cloud scene as a reference. The method directly integrates high-precision data from scenes with different data sources, enabling high-precision mapping and accurate lofting in a real, dynamic scene while reducing the workload and cost of repeated acquisition.

Description

Method and system for lofting implementation of laser radar point cloud mixed reality scene
Technical Field
The application relates to the technical field of mixed reality and mapping, in particular to a method, a system, computer equipment and a storage medium for implementing lofting of a laser radar point cloud mixed reality scene.
Background
With the development of mixed reality (MR) technology and new-generation mapping algorithms, MR technology and intelligent terminals have become widespread and are applied in many fields. For example, a laser radar point cloud scene that already carries three-dimensional coordinates and attribute information can be precisely overlaid and fused with a real scene using MR technology, so that high-precision measurability of the dynamic real scene and fusion with the three-dimensional laser radar point cloud are achieved simultaneously, yielding diverse dynamic information about the real scene.
At present, laser radar scanning can instantly acquire high-precision three-dimensional static information of a scene. Through airborne and ground-based non-contact high-speed laser measurement, geometric data of terrain and complex object surfaces are captured in point cloud form, reproducing and describing the scene with high precision and high fidelity. Once the three-dimensional laser point cloud, with its coordinates and attribute information, is processed into a point cloud scene, high-precision mapping, lofting and analysis can be performed.
However, laser radar scanning acquires instantaneous data, while the real scene changes over time and therefore has timeliness. When high-precision mapping or accurate lofting must be performed on a dynamic scene, scanning and acquisition must be repeated to ensure timeliness, which sharply increases acquisition workload and cost.
Mixed reality (MR) technology is a further development of virtual reality: it introduces real-scene information into the virtual environment to enhance the realism of the user experience, creating an interactive feedback loop between the virtual world, the real world and the user. Because an ordinary real scene lacks measurable scale information and absolute coordinate information, MR technology cannot be used directly to perform high-precision absolute-coordinate mapping and lofting on the real scene; likewise, a measurable absolute-coordinate scale cannot be directly imposed on a real scene.
The scene described by a high-precision three-dimensional laser point cloud (the point cloud scene) and the real scene exist in two different dimensions. At present, the two scenes can exchange data only through a third-party medium; there is no solution that directly integrates their high-precision data.
Disclosure of Invention
Based on the above, it is necessary to provide a method, a system, a computer device and a storage medium for implementing lofting in a laser radar point cloud mixed reality scene, addressing the problem that the scene described by a three-dimensional laser point cloud and the real scene cannot currently be directly fused for mapping and lofting.
A method for implementing lofting in a laser radar point cloud mixed reality scene, the method comprising:
acquiring laser radar point cloud data of an area to be lofted;
assigning coordinates and attributes to the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with a real scene of the region to be lofted;
and lofting on the real scene by taking the coordinates of the point cloud scene after registration as a reference.
In one embodiment, lofting on the real scene includes:
performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference, according to the corresponding coordinate relation between the point cloud scene and the real scene;
or
performing absolute-coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
In one embodiment, the step of acquiring laser radar point cloud data of the region to be lofted includes:
acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
and merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted.
In one embodiment, the step of assigning coordinates and attributes to the lidar point cloud data includes:
converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
performing singulation grouping on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object;
and assigning attributes to each group of singulated laser radar point cloud data.
In one embodiment, the step of constructing a point cloud scene from the processed lidar point cloud includes:
processing the processed laser radar point cloud by thinning and block-cutting to construct an image-pyramid-style point cloud scene model;
grouping the point cloud scene model and loading it group by group according to the group numbers;
and constructing and optimizing the loaded point cloud scene model with a quadtree spatial index to obtain the point cloud scene.
In one embodiment, the step of registering the point cloud scene with the real scene of the region to be lofted includes:
performing matrix transformation on the point cloud scene, and projecting the transformed point cloud scene into the real scene through an MR device;
and registering the point cloud scene and the real scene by the least squares method.
In one embodiment, after the step of registering the point cloud scene with the real scene of the region to be lofted, the method further includes:
performing relative mapping on the real scene and outputting relative measurement coordinates;
and converting the relative measurement coordinates into absolute coordinates via the point cloud scene, and outputting the absolute measurement coordinates.
A system for implementing lofting in a laser radar point cloud mixed reality scene, the system comprising:
the data acquisition module is used for acquiring laser radar point cloud data of the region to be lofted;
the point cloud obtaining module is used for assigning coordinates and attributes to the laser radar point cloud data to obtain a processed laser radar point cloud;
the point cloud scene construction module is used for constructing a point cloud scene according to the processed laser radar point cloud;
the registration module is used for registering the point cloud scene with the real scene of the region to be lofted;
and the lofting module is used for lofting the real scene by taking the coordinates of the registered point cloud scene as a reference.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of:
acquiring laser radar point cloud data of an area to be lofted;
assigning coordinates and attributes to the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with a real scene of the region to be lofted;
and lofting on the real scene by taking the coordinates of the point cloud scene after registration as a reference.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring laser radar point cloud data of an area to be lofted;
assigning coordinates and attributes to the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with a real scene of the region to be lofted;
and lofting on the real scene by taking the coordinates of the point cloud scene after registration as a reference.
According to the method, system, computer device and storage medium for implementing lofting in a laser radar point cloud mixed reality scene, the laser radar point cloud data are first acquired and processed, a point cloud scene is constructed from the processed data, and the point cloud scene is then fused and registered with the real scene. This establishes a direct data association between the point cloud scene and the real scene, directly integrating high-precision data from different data sources, so that accurate lofting can be performed in the real, dynamic scene while the workload and cost of repeated acquisition are reduced.
Drawings
FIG. 1 is a flow diagram of a method for lofting a laser radar point cloud mixed reality scene in one embodiment;
fig. 2 is a schematic flow chart of point cloud scene and real scene registration in one embodiment;
FIG. 3 is a flow chart of a method for lofting a laser radar point cloud mixed reality scene in one embodiment;
FIG. 4 is a graph of coordinate transformation of a point cloud scene and a real scene in one embodiment;
FIG. 5 is a point cloud scene matrix transition diagram in one embodiment;
FIG. 6 is a flow chart of a method for lofting a laser radar point cloud mixed reality scene in another embodiment;
FIG. 7 is a schematic diagram of a system for laser radar point cloud mixed reality scene implementation lofting in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The application provides a method for lofting in a laser radar point cloud mixed reality scene. The method is applied to a terminal, which may be a computer, an MR smart-glasses terminal, or the like.
In one embodiment, as shown in fig. 1, a method for implementing lofting in a laser radar point cloud mixed reality scene is provided. Taking its application to a computer as an example, the method includes the following steps:
step S102, obtaining laser radar point cloud data of an area to be lofted;
The point cloud is the set of points on a product's outer surface obtained by a measuring instrument in reverse engineering. Laser radar point cloud data refers to the set of surface points of the area to be lofted acquired by laser radar equipment, and the area to be lofted is the actual space requiring lofting, which may be a room, a building, a mountain, a landscape or a scenic area, and so on.
Step S104, giving coordinates and attribute processing to the laser radar point cloud data to obtain processed laser radar point cloud;
Specifically, a point cloud obtained by laser measurement contains three-dimensional coordinates and laser reflection intensity. The three-dimensional coordinates are usually relative; assigning coordinates to the laser radar point cloud data therefore usually means converting the relative three-dimensional coordinates into absolute three-dimensional coordinates. Assigning attributes to the laser radar point cloud data includes grouping it and giving it color, shape, size, use, and so on.
In one specific embodiment, the step of assigning coordinates and attributes to the laser radar point cloud data includes: converting the relative coordinates of the laser radar point cloud data into absolute coordinates; performing singulation grouping on the laser radar point cloud data, wherein each group represents a single object; and assigning color attributes to each group of singulated laser radar point cloud data.
The grouping attribute of the laser radar point cloud means the point cloud is divided into singulated groups, so that after grouping, sets of points represent individual ground objects; the point cloud classification is then processed thematically and assigned colors.
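The singulation grouping and colour assignment just described can be sketched as follows. This is an illustrative assumption of how grouped points might be given thematic colours; the point coordinates, group labels and palette are invented for the example and do not come from the patent:

```python
import numpy as np

# Hypothetical sketch: each point is (x, y, z); group_ids singulate the
# cloud into objects (e.g. 0 = ground, 1 = building, 2 = tree).
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.1],
    [0.5, 0.5, 3.0],
    [0.6, 0.4, 3.2],
    [2.0, 2.0, 5.0],
])
group_ids = np.array([0, 0, 1, 1, 2])

# Thematic colour table per group (RGB) -- an illustrative attribute table.
palette = {0: (120, 90, 60), 1: (200, 60, 60), 2: (60, 160, 60)}

def assign_colors(group_ids, palette):
    """Return one RGB colour per point according to its singulated group."""
    return np.array([palette[g] for g in group_ids], dtype=np.uint8)

colors = assign_colors(group_ids, palette)
```

Each point thus carries both its coordinates and a group-derived colour attribute, which is the form the later scene-construction step consumes.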
Step S106, constructing a point cloud scene according to the processed laser radar point cloud;
the point cloud scene construction is to utilize the processed laser radar point cloud and model by adopting some algorithms to form a point cloud scene.
Step S108, registering the point cloud scene with the real scene of the region to be lofted;
Specifically, this generally means that the point cloud scene is loaded and placed in the real scene, and then algorithms (such as least squares or human-computer interaction) are used to fuse and match the point cloud scene with the real scene, completing registration (as shown in fig. 2).
And step S110, lofting on the real scene by taking the coordinates of the registered point cloud scene as a reference.
In particular, lofting creates a complex three-dimensional object by sweeping a two-dimensional cross-section along a path; different shapes may be given to different segments of the same path, so many complex models can be built by lofting. In engineering, lofting (setting out) transfers the design on the drawing to the actual site. In this embodiment, the coordinate information of the point cloud scene is used to loft on the real scene.
According to this method for implementing lofting in a laser radar point cloud mixed reality scene, the laser radar point cloud data are first acquired and processed, a point cloud scene is constructed, and the point cloud scene is then fused and registered with the real scene, establishing a direct data association between the two. High-precision data from different data sources are thus directly integrated, so that high-precision mapping or accurate lofting can be performed in the real, dynamic scene while the workload and cost of repeated acquisition are reduced.
In one embodiment, lofting on a real scene includes:
performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference, according to the corresponding coordinate relation between the point cloud scene and the real scene; or performing absolute-coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
Specifically, absolute coordinates and relative coordinates are both important indicators for determining geographic position and play an important role in locating geographic objects of interest; understanding how they differ helps in learning geography and reading maps. Absolute coordinates are defined against a fixed standard: for example, longitude and latitude on Earth are absolute coordinates, and such coordinates do not change with the reference object. Relative coordinates, by contrast, express the coordinates or position of an object with respect to a reference object. For example, absolute coordinates give the coordinate values from the origin to your current position, wherever you are, while relative coordinates can be obtained by subtracting the absolute coordinates of the previous point from those of the current point.
In this embodiment, lofting is divided into relative lofting and absolute lofting. Relative lofting takes a certain point in the point cloud scene as the reference (i.e. relative coordinates), determines the lofting position from the corresponding coordinate relation between the point cloud scene and the real scene, and then completes the lofting at that position; the coordinate relation is information such as distance and angle. In addition, because the point cloud scene has complete absolute coordinates, the lofting position can be determined in the point cloud scene from the coordinate information of the object to be lofted, and lofting is then performed at the corresponding position in the real scene.
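The distinction between relative and absolute lofting can be illustrated with a small sketch; the reference point, distance and azimuth values are hypothetical, and the relative case uses a distance-and-azimuth correspondence of the kind described above:

```python
import math

def relative_loft(ref_point, distance, azimuth_rad):
    """Relative lofting: offset from a chosen reference point in the
    point cloud scene by a distance and azimuth (the coordinate relation)."""
    x, y = ref_point
    return (x + distance * math.sin(azimuth_rad),
            y + distance * math.cos(azimuth_rad))

def absolute_loft(target_abs_xy):
    """Absolute lofting: the point cloud scene already carries absolute
    coordinates, so the target position is used directly."""
    return target_abs_xy

# Loft 10 units at azimuth 0 (due 'north') from a hypothetical reference.
pos = relative_loft((100.0, 200.0), 10.0, 0.0)
```

Note the sin/cos placement mirrors the conversion formulas given later in the description (x offset by S·sinθ, y offset by S·cosθ).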
With this method, accurate lofting can be performed in the real, dynamic scene and the workload and cost of repeated acquisition are reduced; that is, once the point cloud scene is mixed with the real scene, MR technology can be used directly to loft the real scene with high absolute-coordinate precision, efficiently and accurately.
In one embodiment, as shown in fig. 3, in the step of acquiring laser radar point cloud data of an area to be lofted, the method includes:
step S202, three-dimensional laser radar point cloud data of an area to be lofted are obtained from each collecting device;
and step S204, merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted.
Specifically, the acquisition equipment includes laser radar devices such as manned aircraft, unmanned aircraft, ground vehicle-mounted, backpack and handheld units. Each acquisition device collects laser radar data, and the individual datasets are merged and spliced to obtain the laser radar point cloud data of the region to be lofted. In this way, laser radar data from multiple sources are obtained, making the resulting point cloud data of the region to be lofted more comprehensive and accurate.
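The merging and splicing of per-device point clouds might look like the following sketch, where the three small arrays stand in for data from different acquisition platforms already transformed into a common coordinate frame (an assumption made for the example):

```python
import numpy as np

# Hypothetical point sets from three acquisition platforms.
aerial = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.5]])
vehicle = np.array([[0.0, 0.0, 0.2], [0.0, 1.0, 0.3]])
backpack = np.array([[5.0, 5.0, 1.0]])

def merge_point_clouds(*clouds):
    """Merge and splice per-device clouds into one array of points."""
    merged = np.vstack(clouds)
    # Drop exact duplicates that can appear in overlapping scan areas.
    return np.unique(merged, axis=0)

cloud = merge_point_clouds(aerial, vehicle, backpack)
```

In practice splicing also involves registering the clouds into the common frame first; that step is covered by the registration discussion below and is omitted here.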
In one embodiment, the step of constructing the point cloud scene from the processed laser radar point cloud includes:
processing the processed laser radar point cloud by thinning and block-cutting to construct an image-pyramid-style point cloud scene model;
grouping the point cloud scene model and loading it group by group according to the group numbers;
and constructing and optimizing the loaded point cloud scene model with a quadtree spatial index to obtain the point cloud scene.
First, thinning and block-cutting are applied to the massive laser radar point cloud data to construct a point cloud scene model whose structure resembles an image pyramid; the laser radar point clouds in the model are then grouped and loaded group by group; finally, a quadtree spatial index is used to construct and optimize the loaded model, yielding the point cloud scene. Modeling with these combined methods makes the resulting point cloud scene more accurate.
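One possible reading of the thinning, block-cutting and quadtree steps is sketched below; the grid-cell decimation and XY quadtree tiling are common simplifications of these techniques and are not claimed to match the patent's exact algorithms:

```python
import numpy as np

def thin(points, cell=1.0):
    """Thinning: keep one point per 3-D grid cell (simple decimation)."""
    keys = np.floor(points / cell).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]

def quadtree_tiles(points, depth):
    """Block-cutting via a quadtree in the XY plane: at each level the
    bounding square splits into 4, giving pyramid-style tile groups."""
    n = 2 ** depth
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)
    idx = np.minimum((n * (xy - lo) / span).astype(int), n - 1)
    tiles = {}
    for i, (cx, cy) in enumerate(idx):
        tiles.setdefault((cx, cy), []).append(i)
    return tiles

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.1, 0.0], [9.0, 9.0, 1.0]])
thinned = thin(pts, cell=1.0)          # two near-identical points collapse
tiles = quadtree_tiles(pts, depth=1)   # 2x2 tiling at pyramid level 1
```

Running the tiler at increasing `depth` values produces the coarse-to-fine levels of an image-pyramid-style model, which can then be loaded group by group.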
In one embodiment, the step of registering the point cloud scene with the real scene of the region to be lofted includes:
performing matrix transformation on the point cloud scene, and projecting the transformed point cloud scene into the real scene through the MR device; and registering the point cloud scene and the real scene using the least squares method.
Specifically, as shown in fig. 4, the point cloud scene is first transformed by a matrix and then projected into the real scene by the MR device. Once projected, the point cloud scene is usually registered to the real scene by matching homonymous (corresponding) points; when matching them, least-squares computation of the transformation can be combined with human-computer interaction to register with the real scene.
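The least-squares computation of the transformation over matched homonymous points is commonly done as a rigid Kabsch/Umeyama fit; the sketch below shows that approach on synthetic 2-D correspondences and is an illustrative assumption, not necessarily the patent's exact computation:

```python
import numpy as np

def least_squares_rigid(src, dst):
    """Least-squares rigid registration (Kabsch, no scale): find R, t
    minimising sum ||R @ src_i + t - dst_i||^2 over homonymous pairs."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate and translate a square of homonymous points.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = src @ R_true.T + np.array([5.0, -2.0])
R, t = least_squares_rigid(src, dst)
```

With noisy real-world correspondences the same fit gives the best rigid alignment in the least-squares sense, after which human-computer interaction can refine the result.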
In one embodiment, after the step of registering the point cloud scene with the real scene of the region to be lofted, the method further includes:
relatively mapping the real scene and outputting relative measurement coordinates;
Specifically, mapping the real scene generally means selecting feature points in the real scene and then measuring them to obtain mapping coordinates; multiple feature points may be selected, and the more feature points are selected, the more accurate the mapping result.
And converting the relative measurement coordinates into absolute coordinates via the point cloud scene, and outputting the absolute measurement coordinates.
Specifically, the point cloud scene mapping coordinates may be determined from the relationship between the real scene and the point cloud scene.
In one alternative embodiment, the following formulas may be used to convert the relative measurement coordinates into absolute coordinates via the point cloud scene:

y₁ = y + S·cosθ
x₁ = x + S·sinθ

where (x, y) are the point cloud scene coordinates, (x₁, y₁) are the real scene coordinates, θ is the azimuth, and S is the distance (as shown in fig. 5).
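These formulas translate directly into code; the sample coordinates, distance and azimuth below are invented for illustration:

```python
import math

def to_real_scene(x, y, distance, azimuth):
    """Convert point cloud scene coordinates (x, y) to real scene
    coordinates (x1, y1) via distance S and azimuth theta:
        x1 = x + S * sin(theta),  y1 = y + S * cos(theta)"""
    return (x + distance * math.sin(azimuth),
            y + distance * math.cos(azimuth))

# 50 units at azimuth pi/2 (due 'east') from a hypothetical point.
x1, y1 = to_real_scene(100.0, 200.0, 50.0, math.pi / 2)
```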
In one embodiment, as shown in fig. 6, the step of registering the point cloud scene with the real scene of the region to be lofted further includes:
step 302, acquiring attribute information and coordinate information from a real scene;
The attribute information includes laser radar point cloud grouping information, color information, and so on. Grouping means dividing the laser radar point clouds by singulation, with each group of laser radar point cloud data representing a single object; the grouping information includes the grouping scheme, the number of groups, and the like. The color information is the color of the laser radar point cloud in the real scene.
Step 304, carrying out coordinate modification on the laser radar point cloud according to the coordinate information, and carrying out attribute modification on the laser radar point cloud according to the attribute information to obtain a modified point cloud scene;
Specifically, when the point cloud scene and the real scene are registered and fused, some differences may exist between them, for example in attributes or coordinates. In this embodiment the point cloud scene can therefore be modified according to the attribute and coordinate information of the real scene, making the point cloud scene more accurate and the registration between the two scenes tighter.
It should be understood that, although the steps in the flowcharts of figs. 1-6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 1-6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages need not be performed sequentially and may alternate with other steps or with the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, a system for lofting implementation of a laser radar point cloud mixed reality scene is provided, the system comprising:
the data acquisition module 10 is used for acquiring laser radar point cloud data of the region to be lofted;
the point cloud obtaining module 20 is configured to perform coordinate assignment and attribute processing on the laser radar point cloud data to obtain a processed laser radar point cloud;
the point cloud scene construction module 30 is configured to construct a point cloud scene according to the processed laser radar point cloud;
the registration module 40 is configured to register the point cloud scene with a real scene of the region to be lofted;
the lofting module 50 is configured to loft a real scene with coordinates of the registered point cloud scene as a reference.
In one embodiment, the lofting module includes:
the relative lofting module is used for performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference and using the corresponding coordinate relation between the point cloud scene and the real scene; or an absolute lofting module, which performs absolute coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
In one embodiment, the data acquisition module comprises:
the laser radar data acquisition module is used for acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
and the point cloud data acquisition module is used for merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted.
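As a rough illustration of the merging and splicing performed by the point cloud data acquisition module, per-device clouds can be transformed into a common frame and concatenated. This is a hypothetical sketch, not the patent's implementation; the function name and the use of 4x4 homogeneous transforms are assumptions:

```python
import numpy as np

def merge_point_clouds(clouds, transforms=None):
    """Merge per-device point clouds (each an N_i x 3 array) into one cloud.

    If `transforms` is given, transforms[i] is a 4x4 homogeneous matrix
    mapping device i's local frame into the common frame.
    """
    merged = []
    for i, cloud in enumerate(clouds):
        pts = np.asarray(cloud, dtype=float)
        if transforms is not None:
            # apply homogeneous transform: p' = R @ p + t
            hom = np.hstack([pts, np.ones((len(pts), 1))])
            pts = (hom @ np.asarray(transforms[i]).T)[:, :3]
        merged.append(pts)
    return np.vstack(merged)
```

In practice the per-device transforms would come from scanner calibration or coarse alignment before splicing.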
In one embodiment, the point cloud obtaining module includes:
the coordinate conversion module is used for converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
the grouping module is used for performing monomerization grouping processing on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object (monomer);
and the attribute giving module is used for giving color attributes to each group of the individualized laser radar point cloud data.
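The operations of these three modules can be sketched as follows. The array layout, the translation-only coordinate conversion, and the palette mapping are illustrative assumptions only:

```python
import numpy as np

def to_absolute(points_rel, origin_abs):
    """Convert relative coordinates to absolute ones by shifting to a
    surveyed absolute origin (a pure translation here; a real pipeline
    may also need rotation and scale)."""
    return np.asarray(points_rel, float) + np.asarray(origin_abs, float)

def group_and_color(points, group_labels, palette):
    """Individualized grouping: each label marks one single object, and
    every group is given an RGB color attribute from the palette."""
    labels = np.asarray(group_labels)
    rgb = np.array([palette[g] for g in labels])
    return {"xyz": np.asarray(points, float), "group": labels, "rgb": rgb}
```

A point cloud record then carries coordinates, a group id, and a color per point, matching the attribute scheme described above.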
In one embodiment, the point cloud scene construction module includes:
the point cloud scene model construction module is used for processing the processed laser radar point cloud by a thinning and block-cutting method to construct point cloud scene models of an image pyramid;
the point cloud scene model loading module is used for grouping the point cloud scene models and loading them in sequence according to the number of groups;
the point cloud scene obtaining module is used for constructing and optimizing the loaded point cloud scene models by a quadtree spatial index method to obtain a point cloud scene.
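A minimal sketch of the thinning/pyramid and quadtree ideas behind these modules. The node capacity, thinning factor, and uniform-stride thinning are assumed values for illustration, not taken from the patent:

```python
import numpy as np

def build_pyramid(points, levels=3, factor=4):
    """Image-pyramid-style levels of detail: level 0 is the full cloud,
    each coarser level keeps every factor**i-th point (uniform thinning)."""
    return [points[::factor ** i] for i in range(levels)]

class Quadtree:
    """Tiny XY quadtree spatial index over a point array (a sketch;
    points lying exactly on the far boundary are ignored here)."""
    def __init__(self, points, bounds, capacity=64, depth=0, max_depth=8):
        self.bounds = bounds  # (xmin, ymin, xmax, ymax)
        self.children = []
        self.points = points
        if len(points) <= capacity or depth >= max_depth:
            return  # leaf node keeps its points
        xmin, ymin, xmax, ymax = bounds
        cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
        for qx0, qy0, qx1, qy1 in ((xmin, ymin, cx, cy), (cx, ymin, xmax, cy),
                                   (xmin, cy, cx, ymax), (cx, cy, xmax, ymax)):
            m = ((points[:, 0] >= qx0) & (points[:, 0] < qx1) &
                 (points[:, 1] >= qy0) & (points[:, 1] < qy1))
            self.children.append(Quadtree(points[m], (qx0, qy0, qx1, qy1),
                                          capacity, depth + 1, max_depth))
        self.points = None  # interior node: points live in the children
```

Loading coarse pyramid levels first and refining tiles through the quadtree is one common way to keep a massive point cloud interactive.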
In one embodiment, the method further comprises:
the relative measurement coordinate output module is used for relatively mapping the real scene and outputting relative measurement coordinates;
and the absolute measurement coordinate output module is used for converting the relative measurement coordinates into absolute coordinates through the point cloud scene and outputting the absolute measurement coordinates.
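The conversion performed by these modules follows the azimuth/distance formulas stated in the claims, y1 = y + S·cosθ and x1 = x + S·sinθ. A minimal sketch, assuming the azimuth θ is supplied in radians:

```python
import math

def relative_to_absolute(x, y, azimuth, distance):
    """Convert a point cloud scene coordinate (x, y) to a real scene
    coordinate (x1, y1) given azimuth theta (radians) and distance S:
    x1 = x + S*sin(theta), y1 = y + S*cos(theta)."""
    x1 = x + distance * math.sin(azimuth)
    y1 = y + distance * math.cos(azimuth)
    return x1, y1
```

With the azimuth measured clockwise from north, as is conventional in surveying, sinθ carries the easting offset and cosθ the northing offset.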
In one embodiment, the registration module comprises:
the information extraction module is used for acquiring attribute information and coordinate information from the real scene;
the point cloud scene modification module is used for carrying out coordinate modification on the laser radar point cloud according to the coordinate information and carrying out attribute modification on the laser radar point cloud according to the attribute information to obtain a modified point cloud scene.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing fault instance data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method for lofting implementation of a laser radar point cloud mixed reality scene.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring laser radar point cloud data of an area to be lofted;
performing coordinate assignment and attribute processing on the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with the real scene of the region to be lofted;
and setting out on the real scene by taking the coordinates of the registered point cloud scene as a reference.
In one embodiment, the processor, when executing the computer program, performs the steps of:
lofting on a real scene includes:
performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference and using the corresponding coordinate relation between the point cloud scene and the real scene; or performing absolute coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
In one embodiment, the processor, when executing the computer program, performs the steps of:
acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
and merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted.
In one embodiment, the processor, when executing the computer program, performs the steps of:
converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
performing monomerization grouping processing on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object (monomer);
and assigning color attributes to each group of individualized laser radar point cloud data.
In one embodiment, the processor, when executing the computer program, performs the steps of:
processing the processed laser radar point cloud by a thinning and block-cutting method to construct point cloud scene models of an image pyramid;
grouping the point cloud scene models, and loading them in sequence according to the number of groups;
and constructing and optimizing the loaded point cloud scene models by a quadtree spatial index method to obtain a point cloud scene.
In one embodiment, the processor, when executing the computer program, performs the steps of:
after the step of registering the point cloud scene with the real scene of the region to be lofted, the method further comprises the following steps:
relatively mapping the real scene and outputting relative measurement coordinates;
and converting the relative measurement coordinates into absolute coordinates through the point cloud scene and outputting the absolute measurement coordinates.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring laser radar point cloud data of an area to be lofted;
performing coordinate assignment and attribute processing on the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with the real scene of the region to be lofted;
and setting out on the real scene by taking the coordinates of the registered point cloud scene as a reference.
In one embodiment, the computer program when executed by a processor performs the steps of:
lofting on a real scene includes:
performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference and using the corresponding coordinate relation between the point cloud scene and the real scene; or performing absolute coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
In one embodiment, the computer program when executed by a processor performs the steps of:
the step of acquiring the laser radar point cloud data of the region to be lofted comprises the following steps:
acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
and merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted.
In one embodiment, the computer program when executed by a processor performs the steps of:
converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
performing monomerization grouping processing on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object (monomer);
and assigning color attributes to each group of individualized laser radar point cloud data.
In one embodiment, the computer program when executed by a processor performs the steps of:
processing the processed laser radar point cloud by a thinning and block-cutting method to construct point cloud scene models of an image pyramid;
grouping the point cloud scene models, and loading them in sequence according to the number of groups;
and constructing and optimizing the loaded point cloud scene models by a quadtree spatial index method to obtain a point cloud scene.
In one embodiment, the computer program when executed by a processor performs the steps of:
performing matrix conversion on the point cloud scene, and projecting the matrix-converted point cloud scene into the real scene through MR equipment;
registering the point cloud scene and the real scene by adopting a least square method.
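The patent does not spell out the least squares registration step. One common realization is a rigid best-fit (Kabsch/Procrustes solution via SVD) between matched point pairs from the point cloud scene and the real scene; the sketch below is this assumed variant, not the patent's specific algorithm:

```python
import numpy as np

def least_squares_rigid(src, dst):
    """Best-fit rotation R and translation t minimizing
    sum ||R @ src_i + t - dst_i||^2 over matched point pairs
    (Kabsch/Procrustes solution via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applying the recovered R and t to the point cloud scene aligns it with the real scene; iterating this with re-matched correspondences gives the familiar ICP refinement.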
In one embodiment, the computer program when executed by a processor performs the steps of:
after the step of registering the point cloud scene with the real scene of the region to be lofted, the method further comprises the following steps:
relatively mapping the real scene and outputting relative measurement coordinates;
and converting the relative measurement coordinates into absolute coordinates through the point cloud scene and outputting the absolute measurement coordinates.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored on a non-transitory computer readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely represent several implementations of the present application, and although they are described relatively specifically and in detail, they are not therefore to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (4)

1. A method for lofting implementation of a laser radar point cloud mixed reality scene, the method comprising:
acquiring laser radar point cloud data of an area to be lofted;
performing coordinate assignment and attribute processing on the laser radar point cloud data to obtain a processed laser radar point cloud;
constructing a point cloud scene according to the processed laser radar point cloud;
registering the point cloud scene with a real scene of the region to be lofted;
setting out on the real scene by taking the coordinates of the registered point cloud scene as a reference;
the step of acquiring the laser radar point cloud data of the region to be lofted comprises the following steps:
acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
merging and splicing the three-dimensional laser radar point cloud data to obtain the laser radar point cloud data of the region to be lofted;
the step of performing coordinate assignment and attribute processing on the laser radar point cloud data includes:
converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
performing monomerization grouping processing on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object (monomer);
assigning color attributes to each group of individualized laser radar point cloud data;
the step of constructing the point cloud scene according to the processed laser radar point cloud comprises the following steps:
processing the processed laser radar point cloud by a thinning and block-cutting method to construct point cloud scene models of an image pyramid;
grouping the point cloud scene models, and loading them in sequence according to the number of groups;
constructing and optimizing the loaded point cloud scene models by a quadtree spatial index method to obtain the point cloud scene;
the registering step of the point cloud scene and the real scene of the region to be lofted comprises the following steps:
performing matrix conversion on the point cloud scene, and projecting the matrix-converted point cloud scene into the real scene through MR equipment;
registering the point cloud scene and the real scene by adopting a least square method;
in the step of registering the point cloud scene with the real scene of the region to be lofted, the method further comprises:
acquiring attribute information and coordinate information from a real scene;
carrying out coordinate modification on the laser radar point cloud according to the coordinate information, and carrying out attribute modification on the laser radar point cloud according to the attribute information to obtain a modified point cloud scene;
after the step of registering the point cloud scene with the real scene of the region to be lofted, the method further comprises:
relatively mapping the real scene and outputting relative measurement coordinates;
converting the relative measurement coordinates into absolute coordinates through the point cloud scene, and outputting the absolute measurement coordinates;
the following formulas are used to convert the relative measurement coordinates into absolute coordinates through the point cloud scene:
y1 = y + S·cosθ,
x1 = x + S·sinθ,
wherein the point cloud scene coordinates are (x, y), the real scene coordinates are (x1, y1), θ is the azimuth, and S is the distance;
lofting on the real scene includes:
performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference and using the corresponding coordinate relation between the point cloud scene and the real scene; or performing absolute coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
2. A system for lofting implementation of a laser radar point cloud mixed reality scene, the system comprising:
the data acquisition module is used for acquiring laser radar point cloud data of the region to be lofted;
the point cloud obtaining module is used for performing coordinate assignment and attribute processing on the laser radar point cloud data to obtain a processed laser radar point cloud;
the point cloud scene construction module is used for constructing a point cloud scene according to the processed laser radar point cloud;
the registration module is used for registering the point cloud scene with the real scene of the region to be lofted;
the registering step of the point cloud scene and the real scene of the region to be lofted comprises the following steps:
performing matrix conversion on the point cloud scene, and projecting the matrix-converted point cloud scene into the real scene through MR equipment;
registering the point cloud scene and the real scene by adopting a least square method;
the lofting module is used for lofting the real scene by taking the coordinates of the registered point cloud scene as a reference;
the data acquisition module comprises:
the laser radar data acquisition module is used for acquiring three-dimensional laser radar point cloud data of the region to be lofted from each acquisition device;
the point cloud data acquisition module is used for merging and splicing the three-dimensional laser radar point cloud data to obtain laser radar point cloud data of the region to be lofted;
the point cloud obtaining module comprises:
the coordinate conversion module is used for converting the relative coordinates of the laser radar point cloud data into absolute coordinates;
the grouping module is used for performing monomerization grouping processing on the laser radar point cloud data, wherein each group of laser radar point cloud data represents a single object (monomer);
the attribute giving module is used for giving color attributes to each group of the individualized laser radar point cloud data;
the point cloud scene construction module comprises:
the point cloud scene model construction module is used for processing the processed laser radar point cloud by a thinning and block-cutting method to construct point cloud scene models of an image pyramid;
the point cloud scene model loading module is used for grouping the point cloud scene models and loading them in sequence according to the number of groups;
the point cloud scene acquisition module is used for constructing and optimizing the loaded point cloud scene models by a quadtree spatial index method to obtain a point cloud scene;
the registration module includes:
the information extraction module is used for acquiring attribute information and coordinate information from the real scene;
the point cloud scene modification module is used for carrying out coordinate modification on the laser radar point cloud according to the coordinate information and carrying out attribute modification on the laser radar point cloud according to the attribute information to obtain a modified point cloud scene;
the relative measurement coordinate output module is used for relatively mapping the real scene and outputting relative measurement coordinates;
the absolute measurement coordinate output module is used for converting the relative measurement coordinates into absolute coordinates through the point cloud scene and outputting the absolute measurement coordinates;
the following formulas are used to convert the relative measurement coordinates into absolute coordinates through the point cloud scene:
y1 = y + S·cosθ,
x1 = x + S·sinθ,
wherein the point cloud scene coordinates are (x, y), the real scene coordinates are (x1, y1), θ is the azimuth, and S is the distance;
the lofting module includes:
the relative lofting module is used for performing relative lofting on the real scene by taking the relative coordinates of the point cloud scene as a reference and using the corresponding coordinate relation between the point cloud scene and the real scene; or an absolute lofting module, which performs absolute coordinate lofting in the real scene by taking the absolute coordinates of the point cloud scene as a reference.
3. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of claim 1 when executing the computer program.
4. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of claim 1.
CN201910621191.6A 2019-07-10 2019-07-10 Method and system for lofting implementation of laser radar point cloud mixed reality scene Active CN110322553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910621191.6A CN110322553B (en) 2019-07-10 2019-07-10 Method and system for lofting implementation of laser radar point cloud mixed reality scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910621191.6A CN110322553B (en) 2019-07-10 2019-07-10 Method and system for lofting implementation of laser radar point cloud mixed reality scene

Publications (2)

Publication Number Publication Date
CN110322553A CN110322553A (en) 2019-10-11
CN110322553B true CN110322553B (en) 2024-04-02

Family

ID=68123178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621191.6A Active CN110322553B (en) 2019-07-10 2019-07-10 Method and system for lofting implementation of laser radar point cloud mixed reality scene

Country Status (1)

Country Link
CN (1) CN110322553B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862882A (en) * 2021-01-28 2021-05-28 北京格灵深瞳信息技术股份有限公司 Target distance measuring method, device, electronic apparatus and storage medium
CN117368869B (en) * 2023-12-06 2024-03-19 航天宏图信息技术股份有限公司 Visualization method, device, equipment and medium for radar three-dimensional power range

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103366250A (en) * 2013-07-12 2013-10-23 中国科学院深圳先进技术研究院 City appearance environment detection method and system based on three-dimensional live-action data
CN107392944A (en) * 2017-08-07 2017-11-24 广东电网有限责任公司机巡作业中心 Full-view image and the method for registering and device for putting cloud
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device
CN108665536A (en) * 2018-05-14 2018-10-16 广州市城市规划勘测设计研究院 Three-dimensional and live-action data method for visualizing, device and computer readable storage medium
CN109003326A (en) * 2018-06-05 2018-12-14 湖北亿咖通科技有限公司 A kind of virtual laser radar data generation method based on virtual world
CN109523578A (en) * 2018-08-27 2019-03-26 中铁上海工程局集团有限公司 A kind of matching process of bim model and point cloud data
CN109633665A (en) * 2018-12-17 2019-04-16 北京主线科技有限公司 The sparse laser point cloud joining method of traffic scene
CN109945845A (en) * 2019-02-02 2019-06-28 南京林业大学 A kind of mapping of private garden spatial digitalized and three-dimensional visualization method

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN107945198B (en) * 2016-10-13 2021-02-23 北京百度网讯科技有限公司 Method and device for marking point cloud data
US10546427B2 (en) * 2017-02-15 2020-01-28 Faro Technologies, Inc System and method of generating virtual reality data from a three-dimensional point cloud
CN107093210B (en) * 2017-04-20 2021-07-16 北京图森智途科技有限公司 Laser point cloud labeling method and device
CN108230379B (en) * 2017-12-29 2020-12-04 百度在线网络技术(北京)有限公司 Method and device for fusing point cloud data


Non-Patent Citations (2)

Title
Point cloud scene stitching algorithm based on particle swarm optimization; Zhang Jun et al.; Journal of National University of Defense Technology; Vol. 35, No. 5; pp. 174-179 *
Multi-scale visualization and efficient management of massive lidar point cloud data; Yu Fei; Engineering Survey; 30 Sept. 2016; pp. 69-73 *

Also Published As

Publication number Publication date
CN110322553A (en) 2019-10-11

Similar Documents

Publication Publication Date Title
Bosch et al. A multiple view stereo benchmark for satellite imagery
US10297074B2 (en) Three-dimensional modeling from optical capture
US20100207936A1 (en) Fusion of a 2d electro-optical image and 3d point cloud data for scene interpretation and registration performance assessment
CN106097348A (en) A kind of three-dimensional laser point cloud and the fusion method of two dimensional image
KR20180053724A (en) Method for encoding bright-field content
CN110322553B (en) Method and system for lofting implementation of laser radar point cloud mixed reality scene
CN105913372A (en) Two-dimensional room plane graph to three-dimensional graph conversion method and system thereof
CN112905831A (en) Method and system for acquiring coordinates of object in virtual scene and electronic equipment
Niu et al. 3d foot reconstruction based on mobile phone photographing
CN110162812B (en) Target sample generation method based on infrared simulation
CN114413849A (en) Three-dimensional geographic information data processing method and device for power transmission and transformation project
Pyka et al. LiDAR-based method for analysing landmark visibility to pedestrians in cities: case study in Kraków, Poland
Zhang et al. Natural forest ALS-TLS point cloud data registration without control points
Zhang et al. Three-dimensional modeling and indoor positioning for urban emergency response
CN113936106A (en) Three-dimensional visualization method and system of monitoring map and related equipment
Lu Algorithm of 3D virtual reconstruction of ancient buildings in Qing dynasty based on image sequence
Jazayeri Trends in 3D land information collection and management
CN112509133A (en) Three-dimensional reservoir high-definition live-action display method based on GIS
Comes et al. From theory to practice: digital reconstruction and virtual reality in archaeology
Kim et al. Data simulation of an airborne lidar system
Ni et al. A method for the registration of multiview range images acquired in forest areas using a terrestrial laser scanner
CN117036511B (en) Calibration method and device for multi-type sensor, computer equipment and storage medium
Zeng et al. 3D model reconstruction based on close-range photogrammetry
Sedlacek et al. 3D reconstruction data set-The Langweil model of Prague
CN116828485B (en) UWB base station three-dimensional layout method and system suitable for complex environment in factory building

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant